From 8301c9e41f2faa4dd72dfdb94d518afa3f8f510b Mon Sep 17 00:00:00 2001 From: Shagun Sodhani Date: Sun, 12 Feb 2023 14:07:52 -0500 Subject: [PATCH] Add toolformer paper --- _site/404.html | 4 + _site/README.md | 227 + _site/assets/BatchNormalization/eq1.png | Bin 0 -> 2726 bytes _site/assets/BatchNormalization/eq2.png | Bin 0 -> 1331 bytes .../FewThingsAboutML/BiasVarianceDiagram.png | Bin 0 -> 132888 bytes _site/assets/HNN/equation1.png | Bin 0 -> 11990 bytes _site/assets/HNN/equation2.png | Bin 0 -> 12955 bytes _site/assets/RNTN/MVRNN.png | Bin 0 -> 13148 bytes _site/assets/RNTN/P1RNTN.png | Bin 0 -> 10300 bytes _site/assets/RNTN/P2RNTN.png | Bin 0 -> 11538 bytes _site/assets/RNTN/ParseTreeMVRNN.png | Bin 0 -> 10982 bytes _site/assets/RNTN/RNN.png | Bin 0 -> 11878 bytes _site/assets/RNTN/RNNModels.png | Bin 0 -> 43854 bytes _site/assets/Swish/plot.png | Bin 0 -> 93150 bytes _site/assets/topk/eq1.png | Bin 0 -> 23479 bytes _site/assets/topk/eq2.png | Bin 0 -> 54734 bytes .../04/27/VQA-Visual-Question-Answering.html | 106 + ...aseline-for-Visual-Question-Answering.html | 34 + .../07/Conditional-Similarity-Networks.html | 103 + ...standing-in-Visual-Question-Answering.html | 38 + .../2017/05/23/Neural-Module-Networks.html | 74 + ...pendency-Parser-using-Neural-Networks.html | 67 + ...-Model-for-Natural-Language-Inference.html | 66 + ...tions-of-Word2Vec-for-Syntax-Problems.html | 9 + .../07/01/One-Model-To-Learn-Them-All.html | 177 + ...works-for-Natural-Language-Processing.html | 153 + ...tribution-Examples-in-Neural-Networks.html | 120 + ...Stop-Reading-in-Machine-Comprehension.html | 129 + ...rehension-with-Self-matching-Networks.html | 90 + ...to-Compute-Word-Embeddings-On-the-Fly.html | 79 + _site/site/2017/08/27/Pointer-Networks.html | 64 + ...tworks-for-Neural-Machine-Translation.html | 85 + ...rmulation-with-Reinforcement-Learning.html | 114 + ...pedia-to-Answer-Open-Domain-Questions.html | 74 + ...wish-A-self-gated-activation-function.html | 45 + ...-Representation-Learning-for-Networks.html | 55 + ...epresentations-via-Gaussian-Embedding.html | 62 + ...e-Building-Blocks-of-Complex-Networks.html | 34 + ...rder-organization-of-complex-networks.html | 62 + ...fer-Learning-in-Machine-Comprehension.html | 98 + ...rvised-Learning-with-Graph-Embeddings.html | 86 + ...rge-scale-Heterogeneous-Text-Networks.html | 97 + ...ing-the-Knowledge-in-a-Neural-Network.html | 92 + ...-are-features-in-deep-neural-networks.html | 99 + ...and-Data-for-Image-Question-Answering.html | 77 + ...ion-with-Internal-and-External-Memory.html | 79 + .../01/29/StarSpace-Embed-All-The-Things.html | 57 + ...ation-with-Pointer-Generator-Networks.html | 72 + ...stems-Using-Recurrent-Neural-Networks.html | 48 + ...nal-Inference-for-Interacting-Systems.html | 97 + ...AT-Solver-from-Single-Bit-Supervision.html | 95 + ...ing-in-Gradient-Based-Neural-Networks.html | 80 + ...-Evidence-with-Reinforcement-Learning.html | 120 + ...ng-Rates-for-Training-Neural-Networks.html | 66 + ...hesis-Training-Pruned-Neural-Networks.html | 72 + ...pervised-Learning-By-Predicting-Noise.html | 147 + ...Message-Passing-for-Quantum-Chemistry.html | 162 + ...-Images-for-Visual-Question-Answering.html | 74 + ...ating-Learning-via-Knowledge-Transfer.html | 69 + .../06/09/Born-Again-Neural-Networks.html | 123 + .../04/Memory-Based-Parameter-Adaption.html | 146 + ...earning-Independent-Causal-Mechanisms.html | 124 + .../2018/07/19/Kronecker-Recurrent-Units.html | 156 + ...gents-for-Deep-Reinforcement-Learning.html | 82 + 
...-Learning-with-Differentiable-Pooling.html | 133 + ...Deep-Learning-with-Symbolic-Knowledge.html | 155 + ...l-Language-in-Multi-Agent-Populations.html | 121 + ...ME-a-Household-Multimodal-Environment.html | 103 + ...ent-Models-Don-t-Need-To-Be-Recurrent.html | 62 + ...Learning-Hierarchical-Representations.html | 130 + ...age-Learning-With-a-Human-In-the-Loop.html | 154 + ...with-Memory-Augmented-Neural-Networks.html | 121 + ...-Optimizers-that-Scale-and-Generalize.html | 70 + ...n-Tradeoffs-for-Hyperbolic-Embeddings.html | 181 + .../12/18/Hindsight-Experience-Replay.html | 76 + ...nctions-for-Deep-Top-k-Classification.html | 117 + ...ng-Graph-Neural-Networks-with-Kernels.html | 73 + ...fficient-Lifelong-Learning-with-A-GEM.html | 164 + ...e-of-Proprioceptive-Periodic-Policies.html | 107 + .../2019/01/22/Modular-meta-learning.html | 196 + ...ning-Skills-without-a-Reward-Function.html | 104 + ...-Memory-for-Recurrent-Neural-Networks.html | 59 + ...zation-for-Knowledge-Graph-Completion.html | 134 + ...hical-Lifelong-Reinforcement-Learning.html | 164 + ...ined-Representations-to-Diverse-Tasks.html | 146 + ...-Explanation-of-Graph-Neural-Networks.html | 200 + ...-Unsupervised-Representation-Learning.html | 122 + ...ural-benchmark-for-continual-learning.html | 51 + ...le-Model-Based-Reinforcement-Learning.html | 56 + ...nough-Compositional-Data-Augmentation.html | 47 + .../01/Relational-Reinforcement-Learning.html | 121 + ...-of-Structured-Exploration-Strategies.html | 92 + ...nforcement-Learning-from-Observations.html | 72 + .../06/20/Hamiltonian-Neural-Networks.html | 79 + ...Abstract-Reasoning-in-Neural-Networks.html | 165 + ...Permutation-Invariant-Neural-Networks.html | 103 + ...eralization-in-Reinforcement-Learning.html | 128 + ...zation-in-Deep-Reinforcement-Learning.html | 141 + ...s-using-Probabilistic-Dynamics-Models.html | 81 + .../15/Abductive-Commonsense-Reasoning.html | 80 + ...Large-Memory-Layers-with-Product-Keys.html | 105 + ...-New-Benchmark-for-Physical-Reasoning.html | 164 + .../2019/09/05/How-to-train-your-MAML.html | 91 + ...tor-Learner-Architectures-for-Deep-RL.html | 62 + ...e-Learning-of-Structured-World-Models.html | 131 + ...hogi-by-Planning-with-a-Learned-Model.html | 121 + ...Purpose-of-Actions-in-Procedural-Text.html | 204 + ...-Learning-of-Language-Representations.html | 114 + ...-Theory-of-State-Abstraction-for-MDPs.html | 124 + ...Superposition-of-many-models-into-one.html | 185 + ...batch-SGD-Training-ImageNet-in-1-Hour.html | 107 + ...derstanding-the-Effectiveness-of-MAML.html | 141 + ...Overfitting-in-Reinforcement-Learning.html | 134 + ...n-in-the-Wild-Findings-and-Challenges.html | 179 + ...del,-and-You-Should-Treat-it-Like-One.html | 112 + ...lection-for-online-continual-learning.html | 122 + ...Discriminators-Rather-Than-Generators.html | 150 + ...up-Beyond-Empirical-Risk-Minimization.html | 71 + ...-Than-10,000-Image-Categories-Tell-Us.html | 44 + ...of-Independent-Deep-Generative-Models.html | 95 + ...sentations-for-Reinforcement-Learning.html | 116 + .../30/Supervised-Contrastive-Learning.html | 110 + ...Warm-Starting-Neural-Network-Training.html | 136 + ...zation-in-Deep-Reinforcement-Learning.html | 78 + ...tric-models-in-reinforcement-learning.html | 106 + ...aking-via-Local-Economic-Transactions.html | 134 + ...ider-Optima-and-Better-Generalization.html | 91 + ...Batch-Normalization-for-Meta-Learning.html | 168 + ...-Balancing-in-Deep-Multitask-Networks.html | 99 + ...dient-Surgery-for-Multi-Task-Learning.html | 113 + 
...arsely-Gated-Mixture-of-Experts-Layer.html | 143 + ...-with-Composition-in-Classifier-Space.html | 88 + ...rcement-Learning-and-the-Deadly-Triad.html | 139 + ...ing-Fundamentals-of-Experience-Replay.html | 130 + ...cene-Decomposition-and-Representation.html | 91 + ...-Yield,-and-Scalable-Tolerant-Systems.html | 82 + .../A-Foliated-View-of-Transfer-Learning.html | 72 + ...ations-Reduce-Catastrophic-Forgetting.html | 63 + ...ng-Explanations-That-Are-Hard-To-Vary.html | 86 + ...xtrapolation-via-Structured-MaxEnt-RL.html | 119 + ...ces-Managing-Technical-Debt-at-Google.html | 140 + ...ent-for-Internet-Scale-Single-Sign-On.html | 127 + ...imple-Siamese-Representation-Learning.html | 107 + ...rn-Distributed-Database-System-Design.html | 123 + ...ears-later-How-the-rules-have-changed.html | 154 + ...centralized-structured-storage-system.html | 101 + ...r-container-based-distributed-systems.html | 68 + ...Compositional-Explanations-of-Neurons.html | 155 + ...with-Micro-Batch-Pipeline-Parallelism.html | 70 + ...y-based-Models-for-Continual-Learning.html | 134 + _site/site/2021/01/25/HyperNetworks.html | 83 + ...-by-Generating-Task-specific-Adapters.html | 117 + ...Continual-learning-with-hypernetworks.html | 120 + .../2021/02/15/When-Do-Curricula-Work.html | 114 + ...en-Representations-and-Task-Semantics.html | 182 + ...k-Prediction-a-View-from-the-Trenches.html | 144 + ...-Predicting-Clicks-on-Ads-at-Facebook.html | 180 + _site/site/2021/03/15/The-Tail-at-Scale.html | 184 + ...-Networks-for-YouTube-Recommendations.html | 151 + ...ptation-across-Tasks-and-Environments.html | 192 + ...els-Can-Teach-Themselves-to-Use-Tools.html | 220 + _site/site/LICENSE.md | 9 + _site/site/README.md | 134 + _site/site/archieve.html | 439 + _site/site/atom.xml | 17028 ++++++++++++++++ _site/site/index.html | 13 + _site/site/index.html.1 | 924 + .../public/apple-touch-icon-precomposed.png | Bin 0 -> 831 bytes _site/site/public/css/lanyon.css | 563 + _site/site/public/css/poole.css | 430 + _site/site/public/css/style.css | 58 + _site/site/public/css/syntax.css | 65 + _site/site/public/favicon.ico | Bin 0 -> 1150 bytes .../public/font-awesome-4.7.0/HELP-US-OUT.txt | 7 + .../font-awesome-4.7.0/css/font-awesome.css | 2337 +++ .../css/font-awesome.min.css | 4 + .../font-awesome-4.7.0/fonts/FontAwesome.otf | Bin 0 -> 134808 bytes .../fonts/fontawesome-webfont.eot | Bin 0 -> 165742 bytes .../fonts/fontawesome-webfont.svg | 2671 +++ .../fonts/fontawesome-webfont.ttf | Bin 0 -> 165548 bytes .../fonts/fontawesome-webfont.woff | Bin 0 -> 98024 bytes .../fonts/fontawesome-webfont.woff2 | Bin 0 -> 77160 bytes .../font-awesome-4.7.0/less/animated.less | 34 + .../less/bordered-pulled.less | 25 + .../public/font-awesome-4.7.0/less/core.less | 12 + .../font-awesome-4.7.0/less/fixed-width.less | 6 + .../font-awesome-4.7.0/less/font-awesome.less | 18 + .../public/font-awesome-4.7.0/less/icons.less | 789 + .../font-awesome-4.7.0/less/larger.less | 13 + .../public/font-awesome-4.7.0/less/list.less | 19 + .../font-awesome-4.7.0/less/mixins.less | 60 + .../public/font-awesome-4.7.0/less/path.less | 15 + .../less/rotated-flipped.less | 20 + .../less/screen-reader.less | 5 + .../font-awesome-4.7.0/less/stacked.less | 20 + .../font-awesome-4.7.0/less/variables.less | 800 + .../font-awesome-4.7.0/scss/font-awesome.scss | 18 + _site/site/tags.html | 5006 +++++ ...odels Can Teach Themselves to Use Tools.md | 2 +- site/_site | 2 +- 200 files changed, 47627 insertions(+), 2 deletions(-) create mode 100644 _site/404.html create mode 100755 
_site/README.md create mode 100755 _site/assets/BatchNormalization/eq1.png create mode 100755 _site/assets/BatchNormalization/eq2.png create mode 100755 _site/assets/FewThingsAboutML/BiasVarianceDiagram.png create mode 100644 _site/assets/HNN/equation1.png create mode 100644 _site/assets/HNN/equation2.png create mode 100755 _site/assets/RNTN/MVRNN.png create mode 100755 _site/assets/RNTN/P1RNTN.png create mode 100755 _site/assets/RNTN/P2RNTN.png create mode 100755 _site/assets/RNTN/ParseTreeMVRNN.png create mode 100755 _site/assets/RNTN/RNN.png create mode 100755 _site/assets/RNTN/RNNModels.png create mode 100644 _site/assets/Swish/plot.png create mode 100644 _site/assets/topk/eq1.png create mode 100644 _site/assets/topk/eq2.png create mode 100644 _site/site/2017/04/27/VQA-Visual-Question-Answering.html create mode 100644 _site/site/2017/04/28/Simple-Baseline-for-Visual-Question-Answering.html create mode 100644 _site/site/2017/05/07/Conditional-Similarity-Networks.html create mode 100644 _site/site/2017/05/14/Making-the-V-in-VQA-Matter-Elevating-the-Role-of-Image-Understanding-in-Visual-Question-Answering.html create mode 100644 _site/site/2017/05/23/Neural-Module-Networks.html create mode 100644 _site/site/2017/06/03/A-Fast-and-Accurate-Dependency-Parser-using-Neural-Networks.html create mode 100644 _site/site/2017/06/17/A-Decomposable-Attention-Model-for-Natural-Language-Inference.html create mode 100644 _site/site/2017/06/26/Two-Too-Simple-Adaptations-of-Word2Vec-for-Syntax-Problems.html create mode 100644 _site/site/2017/07/01/One-Model-To-Learn-Them-All.html create mode 100644 _site/site/2017/07/09/Ask-Me-Anything-Dynamic-Memory-Networks-for-Natural-Language-Processing.html create mode 100644 _site/site/2017/07/17/Principled-Detection-of-Out-of-Distribution-Examples-in-Neural-Networks.html create mode 100644 _site/site/2017/07/24/ReasoNet-Learning-to-Stop-Reading-in-Machine-Comprehension.html create mode 100644 _site/site/2017/08/07/R-NET-Machine-Reading-Comprehension-with-Self-matching-Networks.html create mode 100644 _site/site/2017/08/21/Learning-to-Compute-Word-Embeddings-On-the-Fly.html create mode 100644 _site/site/2017/08/27/Pointer-Networks.html create mode 100644 _site/site/2017/09/22/Refining-Source-Representations-with-Relation-Networks-for-Neural-Machine-Translation.html create mode 100644 _site/site/2017/10/01/Task-Oriented-Query-Reformulation-with-Reinforcement-Learning.html create mode 100644 _site/site/2017/10/15/Reading-Wikipedia-to-Answer-Open-Domain-Questions.html create mode 100644 _site/site/2017/10/22/Swish-A-self-gated-activation-function.html create mode 100644 _site/site/2017/10/28/HARP-Hierarchical-Representation-Learning-for-Networks.html create mode 100644 _site/site/2017/11/05/Word-Representations-via-Gaussian-Embedding.html create mode 100644 _site/site/2017/11/12/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks.html create mode 100644 _site/site/2017/11/19/Higher-order-organization-of-complex-networks.html create mode 100644 _site/site/2017/11/28/Two-Stage-Synthesis-Networks-for-Transfer-Learning-in-Machine-Comprehension.html create mode 100644 _site/site/2017/12/11/Revisiting-Semi-Supervised-Learning-with-Graph-Embeddings.html create mode 100644 _site/site/2017/12/24/PTE-Predictive-Text-Embedding-through-Large-scale-Heterogeneous-Text-Networks.html create mode 100644 _site/site/2017/12/31/Distilling-the-Knowledge-in-a-Neural-Network.html create mode 100644 _site/site/2018/01/06/How-transferable-are-features-in-deep-neural-networks.html create 
mode 100644 _site/site/2018/01/14/Exploring-Models-and-Data-for-Image-Question-Answering.html create mode 100644 _site/site/2018/01/22/Emotional-Chatting-Machine-Emotional-Conversation-Generation-with-Internal-and-External-Memory.html create mode 100644 _site/site/2018/01/29/StarSpace-Embed-All-The-Things.html create mode 100644 _site/site/2018/02/05/Get-To-The-Point-Summarization-with-Pointer-Generator-Networks.html create mode 100644 _site/site/2018/02/11/Stylistic-Transfer-in-Natural-Language-Generation-Systems-Using-Recurrent-Neural-Networks.html create mode 100644 _site/site/2018/02/17/Neural-Relational-Inference-for-Interacting-Systems.html create mode 100644 _site/site/2018/02/24/Learning-a-SAT-Solver-from-Single-Bit-Supervision.html create mode 100644 _site/site/2018/03/05/An-Empirical-Investigation-of-Catastrophic-Forgetting-in-Gradient-Based-Neural-Networks.html create mode 100644 _site/site/2018/03/11/Improving-Information-Extraction-by-Acquiring-External-Evidence-with-Reinforcement-Learning.html create mode 100644 _site/site/2018/03/18/Cyclical-Learning-Rates-for-Training-Neural-Networks.html create mode 100644 _site/site/2018/03/25/The-Lottery-Ticket-Hypothesis-Training-Pruned-Neural-Networks.html create mode 100644 _site/site/2018/04/02/Unsupervised-Learning-By-Predicting-Noise.html create mode 100644 _site/site/2018/04/08/Neural-Message-Passing-for-Quantum-Chemistry.html create mode 100644 _site/site/2018/05/06/Learning-to-Count-Objects-in-Natural-Images-for-Visual-Question-Answering.html create mode 100644 _site/site/2018/05/21/Net2Net-Accelerating-Learning-via-Knowledge-Transfer.html create mode 100644 _site/site/2018/06/09/Born-Again-Neural-Networks.html create mode 100644 _site/site/2018/07/04/Memory-Based-Parameter-Adaption.html create mode 100644 _site/site/2018/07/11/Learning-Independent-Causal-Mechanisms.html create mode 100644 _site/site/2018/07/19/Kronecker-Recurrent-Units.html create mode 100644 _site/site/2018/08/08/Imagination-Augmented-Agents-for-Deep-Reinforcement-Learning.html create mode 100644 _site/site/2018/08/16/Hierarchical-Graph-Representation-Learning-with-Differentiable-Pooling.html create mode 100644 _site/site/2018/08/21/A-Semantic-Loss-Function-for-Deep-Learning-with-Symbolic-Knowledge.html create mode 100644 _site/site/2018/09/12/Emergence-of-Grounded-Compositional-Language-in-Multi-Agent-Populations.html create mode 100644 _site/site/2018/09/27/HoME-a-Household-Multimodal-Environment.html create mode 100644 _site/site/2018/10/04/When-Recurrent-Models-Don-t-Need-To-Be-Recurrent.html create mode 100644 _site/site/2018/10/11/Poincare-Embeddings-for-Learning-Hierarchical-Representations.html create mode 100644 _site/site/2018/10/18/BabyAI-First-Steps-Towards-Grounded-Language-Learning-With-a-Human-In-the-Loop.html create mode 100644 _site/site/2018/10/25/One-shot-Learning-with-Memory-Augmented-Neural-Networks.html create mode 100644 _site/site/2018/11/01/Learned-Optimizers-that-Scale-and-Generalize.html create mode 100644 _site/site/2018/12/11/Representation-Tradeoffs-for-Hyperbolic-Embeddings.html create mode 100644 _site/site/2018/12/18/Hindsight-Experience-Replay.html create mode 100644 _site/site/2018/12/25/Smooth-Loss-Functions-for-Deep-Top-k-Classification.html create mode 100644 _site/site/2019/01/02/Pre-training-Graph-Neural-Networks-with-Kernels.html create mode 100644 _site/site/2019/01/08/Efficient-Lifelong-Learning-with-A-GEM.html create mode 100644 
_site/site/2019/01/15/Hierarchical-RL-Using-an-Ensemble-of-Proprioceptive-Periodic-Policies.html create mode 100644 _site/site/2019/01/22/Modular-meta-learning.html create mode 100644 _site/site/2019/01/29/Diversity-is-All-You-Need-Learning-Skills-without-a-Reward-Function.html create mode 100644 _site/site/2019/02/05/Linguistic-Knowledge-as-Memory-for-Recurrent-Neural-Networks.html create mode 100644 _site/site/2019/02/19/TuckER-Tensor-Factorization-for-Knowledge-Graph-Completion.html create mode 100644 _site/site/2019/03/12/Model-Primitive-Hierarchical-Lifelong-Reinforcement-Learning.html create mode 100644 _site/site/2019/03/16/To-Tune-or-Not-to-Tune-Adapting-Pretrained-Representations-to-Diverse-Tasks.html create mode 100644 _site/site/2019/03/26/GNN-Explainer-A-Tool-for-Post-hoc-Explanation-of-Graph-Neural-Networks.html create mode 100644 _site/site/2019/04/02/Meta-Learning-Update-Rules-for-Unsupervised-Representation-Learning.html create mode 100644 _site/site/2019/04/09/Towards-a-natural-benchmark-for-continual-learning.html create mode 100644 _site/site/2019/05/14/Multiple-Model-Based-Reinforcement-Learning.html create mode 100644 _site/site/2019/05/21/Good-Enough-Compositional-Data-Augmentation.html create mode 100644 _site/site/2019/06/01/Relational-Reinforcement-Learning.html create mode 100644 _site/site/2019/06/08/Meta-Reinforcement-Learning-of-Structured-Exploration-Strategies.html create mode 100644 _site/site/2019/06/13/Extrapolating-Beyond-Suboptimal-Demonstrations-via-Inverse-Reinforcement-Learning-from-Observations.html create mode 100644 _site/site/2019/06/20/Hamiltonian-Neural-Networks.html create mode 100644 _site/site/2019/06/27/Measuring-Abstract-Reasoning-in-Neural-Networks.html create mode 100644 _site/site/2019/07/18/Set-Transformer-A-Framework-for-Attention-based-Permutation-Invariant-Neural-Networks.html create mode 100644 _site/site/2019/07/25/Quantifying-Generalization-in-Reinforcement-Learning.html create mode 100644 _site/site/2019/08/01/Assessing-Generalization-in-Deep-Reinforcement-Learning.html create mode 100644 _site/site/2019/08/08/Deep-Reinforcement-Learning-in-a-Handful-of-Trials-using-Probabilistic-Dynamics-Models.html create mode 100644 _site/site/2019/08/15/Abductive-Commonsense-Reasoning.html create mode 100644 _site/site/2019/08/22/Large-Memory-Layers-with-Product-Keys.html create mode 100644 _site/site/2019/08/29/PHYRE-A-New-Benchmark-for-Physical-Reasoning.html create mode 100644 _site/site/2019/09/05/How-to-train-your-MAML.html create mode 100644 _site/site/2019/09/12/Gossip-based-Actor-Learner-Architectures-for-Deep-RL.html create mode 100644 _site/site/2019/11/28/Contrastive-Learning-of-Structured-World-Models.html create mode 100644 _site/site/2019/12/05/Mastering-Atari,-Go,-Chess-and-Shogi-by-Planning-with-a-Learned-Model.html create mode 100644 _site/site/2019/12/12/Everything-Happens-for-a-Reason-Discovering-the-Purpose-of-Actions-in-Procedural-Text.html create mode 100644 _site/site/2019/12/19/ALBERT-A-Lite-BERT-for-Self-supervised-Learning-of-Language-Representations.html create mode 100644 _site/site/2019/12/26/Towards-a-Unified-Theory-of-State-Abstraction-for-MDPs.html create mode 100644 _site/site/2020/01/02/Superposition-of-many-models-into-one.html create mode 100644 _site/site/2020/01/09/Accurate-Large-Minibatch-SGD-Training-ImageNet-in-1-Hour.html create mode 100644 _site/site/2020/01/16/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML.html create mode 100644 
_site/site/2020/01/23/Observational-Overfitting-in-Reinforcement-Learning.html create mode 100644 _site/site/2020/01/30/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges.html create mode 100644 _site/site/2020/02/06/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One.html create mode 100644 _site/site/2020/02/13/Gradient-based-sample-selection-for-online-continual-learning.html create mode 100644 _site/site/2020/02/20/ELECTRA-Pre-training-Text-Encoders-as-Discriminators-Rather-Than-Generators.html create mode 100644 _site/site/2020/02/27/mixup-Beyond-Empirical-Risk-Minimization.html create mode 100644 _site/site/2020/03/05/What-Does-Classifying-More-Than-10,000-Image-Categories-Tell-Us.html create mode 100644 _site/site/2020/03/12/Competitive-Training-of-Mixtures-of-Independent-Deep-Generative-Models.html create mode 100644 _site/site/2020/04/09/CURL-Contrastive-Unsupervised-Representations-for-Reinforcement-Learning.html create mode 100644 _site/site/2020/04/30/Supervised-Contrastive-Learning.html create mode 100644 _site/site/2020/06/18/On-the-Difficulty-of-Warm-Starting-Neural-Network-Training.html create mode 100644 _site/site/2020/06/25/Network-Randomization-A-Simple-Technique-for-Generalization-in-Deep-Reinforcement-Learning.html create mode 100644 _site/site/2020/07/02/When-to-use-parametric-models-in-reinforcement-learning.html create mode 100644 _site/site/2020/07/09/Decentralized-Reinforcement-Learning-Global-Decision-Making-via-Local-Economic-Transactions.html create mode 100644 _site/site/2020/07/16/Averaging-Weights-leads-to-Wider-Optima-and-Better-Generalization.html create mode 100644 _site/site/2020/07/23/TASKNORM-Rethinking-Batch-Normalization-for-Meta-Learning.html create mode 100644 _site/site/2020/07/30/GradNorm-Gradient-Normalization-for-Adaptive-Loss-Balancing-in-Deep-Multitask-Networks.html create mode 100644 _site/site/2020/08/06/Gradient-Surgery-for-Multi-Task-Learning.html create mode 100644 _site/site/2020/08/14/Outrageously-Large-Neural-Networks-The-Sparsely-Gated-Mixture-of-Experts-Layer.html create mode 100644 _site/site/2020/08/24/Alpha-Net-Adaptation-with-Composition-in-Classifier-Space.html create mode 100644 _site/site/2020/08/31/Deep-Reinforcement-Learning-and-the-Deadly-Triad.html create mode 100644 _site/site/2020/09/07/Revisiting-Fundamentals-of-Experience-Replay.html create mode 100644 _site/site/2020/09/14/MONet-Unsupervised-Scene-Decomposition-and-Representation.html create mode 100644 _site/site/2020/09/21/Harvest,-Yield,-and-Scalable-Tolerant-Systems.html create mode 100644 _site/site/2020/09/28/A-Foliated-View-of-Transfer-Learning.html create mode 100644 _site/site/2020/10/12/Remembering-for-the-Right-Reasons-Explanations-Reduce-Catastrophic-Forgetting.html create mode 100644 _site/site/2020/10/19/Learning-Explanations-That-Are-Hard-To-Vary.html create mode 100644 _site/site/2020/11/02/One-Solution-is-Not-All-You-Need-Few-Shot-Extrapolation-via-Structured-MaxEnt-RL.html create mode 100644 _site/site/2020/11/09/Searching-for-Build-Debt-Experiences-Managing-Technical-Debt-at-Google.html create mode 100644 _site/site/2020/11/16/Data-Management-for-Internet-Scale-Single-Sign-On.html create mode 100644 _site/site/2020/11/23/Exploring-Simple-Siamese-Representation-Learning.html create mode 100644 _site/site/2020/11/30/Consistency-Tradeoffs-in-Modern-Distributed-Database-System-Design.html create mode 100644 _site/site/2020/12/07/CAP-twelve-years-later-How-the-rules-have-changed.html 
create mode 100644 _site/site/2020/12/14/Cassandra-a-decentralized-structured-storage-system.html create mode 100644 _site/site/2020/12/21/Design-patterns-for-container-based-distributed-systems.html create mode 100644 _site/site/2021/01/04/Compositional-Explanations-of-Neurons.html create mode 100644 _site/site/2021/01/11/GPipe-Easy-Scaling-with-Micro-Batch-Pipeline-Parallelism.html create mode 100644 _site/site/2021/01/18/Energy-based-Models-for-Continual-Learning.html create mode 100644 _site/site/2021/01/25/HyperNetworks.html create mode 100644 _site/site/2021/02/01/Zero-shot-Learning-by-Generating-Task-specific-Adapters.html create mode 100644 _site/site/2021/02/08/Continual-learning-with-hypernetworks.html create mode 100644 _site/site/2021/02/15/When-Do-Curricula-Work.html create mode 100644 _site/site/2021/02/22/Anatomy-of-Catastrophic-Forgetting-Hidden-Representations-and-Task-Semantics.html create mode 100644 _site/site/2021/03/01/Ad-Click-Prediction-a-View-from-the-Trenches.html create mode 100644 _site/site/2021/03/08/Practical-Lessons-from-Predicting-Clicks-on-Ads-at-Facebook.html create mode 100644 _site/site/2021/03/15/The-Tail-at-Scale.html create mode 100644 _site/site/2021/03/22/Deep-Neural-Networks-for-YouTube-Recommendations.html create mode 100644 _site/site/2021/03/29/Synthesized-Policies-for-Transfer-and-Adaptation-across-Tasks-and-Environments.html create mode 100644 _site/site/2023/02/10/Toolformer-Language-Models-Can-Teach-Themselves-to-Use-Tools.html create mode 100755 _site/site/LICENSE.md create mode 100755 _site/site/README.md create mode 100644 _site/site/archieve.html create mode 100644 _site/site/atom.xml create mode 100644 _site/site/index.html create mode 100755 _site/site/index.html.1 create mode 100755 _site/site/public/apple-touch-icon-precomposed.png create mode 100755 _site/site/public/css/lanyon.css create mode 100755 _site/site/public/css/poole.css create mode 100755 _site/site/public/css/style.css create mode 100755 _site/site/public/css/syntax.css create mode 100755 _site/site/public/favicon.ico create mode 100755 _site/site/public/font-awesome-4.7.0/HELP-US-OUT.txt create mode 100755 _site/site/public/font-awesome-4.7.0/css/font-awesome.css create mode 100755 _site/site/public/font-awesome-4.7.0/css/font-awesome.min.css create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/FontAwesome.otf create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.eot create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.svg create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.ttf create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.woff create mode 100755 _site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.woff2 create mode 100755 _site/site/public/font-awesome-4.7.0/less/animated.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/bordered-pulled.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/core.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/fixed-width.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/font-awesome.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/icons.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/larger.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/list.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/mixins.less create mode 100755 
_site/site/public/font-awesome-4.7.0/less/path.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/rotated-flipped.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/screen-reader.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/stacked.less create mode 100755 _site/site/public/font-awesome-4.7.0/less/variables.less create mode 100755 _site/site/public/font-awesome-4.7.0/scss/font-awesome.scss create mode 100644 _site/site/tags.html diff --git a/_site/404.html b/_site/404.html new file mode 100644 index 00000000..b7c3474c --- /dev/null +++ b/_site/404.html @@ -0,0 +1,4 @@
+404: Page not found
+
+Sorry, we've misplaced that URL or it's pointing to something that doesn't exist. Head back home to try finding it again.
+
diff --git a/_site/README.md b/_site/README.md new file mode 100755 index 00000000..d8f5e39c --- /dev/null +++ b/_site/README.md @@ -0,0 +1,227 @@ +# papers-I-read + +I am trying a new initiative - a-paper-a-week. This repository will hold all those papers and related summaries and notes. + +## List of papers + +- [Toolformer - Language Models Can Teach Themselves to Use Tools](https://shagunsodhani.com/papers-I-read/Toolformer-Language-Models-Can-Teach-Themselves-to-Use-Tools) +- [Hints for Computer System Design](https://shagunsodhani.com/papers-I-read/Hints-for-Computer-System-Design) +- [Synthesized Policies for Transfer and Adaptation across Tasks and Environments](https://shagunsodhani.com/papers-I-read/Synthesized-Policies-for-Transfer-and-Adaptation-across-Tasks-and-Environments) +- [Deep Neural Networks for YouTube Recommendations](https://shagunsodhani.com/papers-I-read/Deep-Neural-Networks-for-YouTube-Recommendations) +- [The Tail at Scale](https://shagunsodhani.com/papers-I-read/The-Tail-at-Scale) +- [Practical Lessons from Predicting Clicks on Ads at Facebook](https://shagunsodhani.com/papers-I-read/Practical-Lessons-from-Predicting-Clicks-on-Ads-at-Facebook) +- [Ad Click Prediction - a View from the Trenches](https://shagunsodhani.com/papers-I-read/Ad-Click-Prediction-a-View-from-the-Trenches) +- [Anatomy of Catastrophic Forgetting - Hidden Representations and Task Semantics](https://shagunsodhani.com/papers-I-read/Anatomy-of-Catastrophic-Forgetting-Hidden-Representations-and-Task-Semantics) +- [When Do Curricula Work?](https://shagunsodhani.com/papers-I-read/When-Do-Curricula-Work) +- [Continual learning with hypernetworks](https://shagunsodhani.com/papers-I-read/Continual-learning-with-hypernetworks) +- [Zero-shot Learning by Generating Task-specific Adapters](https://shagunsodhani.com/papers-I-read/Zero-shot-Learning-by-Generating-Task-specific-Adapters) +- [HyperNetworks](https://shagunsodhani.com/papers-I-read/HyperNetworks) +- [Energy-based Models for Continual Learning](https://shagunsodhani.com/papers-I-read/Energy-based-Models-for-Continual-Learning) +- [GPipe - Easy Scaling with Micro-Batch Pipeline Parallelism](https://shagunsodhani.com/papers-I-read/GPipe-Easy-Scaling-with-Micro-Batch-Pipeline-Parallelism) +- [Compositional Explanations of Neurons](https://shagunsodhani.com/papers-I-read/Compositional-Explanations-of-Neurons) +- [Design patterns for container-based distributed systems](https://shagunsodhani.com/papers-I-read/Design-patterns-for-container-based-distributed-systems) +- [Cassandra - a decentralized structured storage system](https://shagunsodhani.com/papers-I-read/Cassandra-a-decentralized-structured-storage-system) +- [CAP twelve years later - How the rules have changed](https://shagunsodhani.com/papers-I-read/CAP-twelve-years-later-How-the-rules-have-changed) +- [Consistency Tradeoffs in Modern Distributed Database System Design](https://shagunsodhani.com/papers-I-read/Consistency-Tradeoffs-in-Modern-Distributed-Database-System-Design) +- [Exploring Simple Siamese Representation Learning](https://shagunsodhani.com/papers-I-read/Exploring-Simple-Siamese-Representation-Learning) +- [Data Management for Internet-Scale Single-Sign-On](https://shagunsodhani.com/papers-I-read/Data-Management-for-Internet-Scale-Single-Sign-On) +- [Searching for Build Debt - Experiences Managing Technical Debt at Google](https://shagunsodhani.com/papers-I-read/Searching-for-Build-Debt-Experiences-Managing-Technical-Debt-at-Google) +- [One Solution is Not All You Need - 
Few-Shot Extrapolation via Structured MaxEnt RL](https://shagunsodhani.com/papers-I-read/One-Solution-is-Not-All-You-Need-Few-Shot-Extrapolation-via-Structured-MaxEnt-RL) +- [Learning Explanations That Are Hard To Vary](https://shagunsodhani.com/papers-I-read/Learning-Explanations-That-Are-Hard-To-Vary) +- [Remembering for the Right Reasons - Explanations Reduce Catastrophic Forgetting](https://shagunsodhani.com/papers-I-read/Remembering-for-the-Right-Reasons-Explanations-Reduce-Catastrophic-Forgetting) +- [A Foliated View of Transfer Learning](https://shagunsodhani.com/papers-I-read/A-Foliated-View-of-Transfer-Learning) +- [Harvest, Yield, and Scalable Tolerant Systems](https://shagunsodhani.com/papers-I-read/Harvest,-Yield,-and-Scalable-Tolerant-Systems) +- [MONet - Unsupervised Scene Decomposition and Representation](https://shagunsodhani.com/papers-I-read/MONet-Unsupervised-Scene-Decomposition-and-Representation) +- [Revisiting Fundamentals of Experience Replay](https://shagunsodhani.com/papers-I-read/Revisiting-Fundamentals-of-Experience-Replay) +- [Deep Reinforcement Learning and the Deadly Triad](https://shagunsodhani.com/papers-I-read/Deep-Reinforcement-Learning-and-the-Deadly-Triad) +- [Alpha Net: Adaptation with Composition in Classifier Space](https://shagunsodhani.com/papers-I-read/Alpha-Net-Adaptation-with-Composition-in-Classifier-Space) +- [Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer](https://shagunsodhani.com/papers-I-read/Outrageously-Large-Neural-Networks-The-Sparsely-Gated-Mixture-of-Experts-Layer) +- [Gradient Surgery for Multi-Task Learning](https://shagunsodhani.com/papers-I-read/Gradient-Surgery-for-Multi-Task-Learning) +- [GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks](https://shagunsodhani.com/papers-I-read/GradNorm-Gradient-Normalization-for-Adaptive-Loss-Balancing-in-Deep-Multitask-Networks) +- [TaskNorm: Rethinking Batch Normalization for Meta-Learning](https://shagunsodhani.com/papers-I-read/TASKNORM-Rethinking-Batch-Normalization-for-Meta-Learning) +- [Averaging Weights leads to Wider Optima and Better Generalization](https://shagunsodhani.com/papers-I-read/Averaging-Weights-leads-to-Wider-Optima-and-Better-Generalization) +- [Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions](https://shagunsodhani.com/papers-I-read/Decentralized-Reinforcement-Learning-Global-Decision-Making-via-Local-Economic-Transactions) +- [When to use parametric models in reinforcement learning?](https://shagunsodhani.com/papers-I-read/When-to-use-parametric-models-in-reinforcement-learning) +- [Network Randomization - A Simple Technique for Generalization in Deep Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Network-Randomization-A-Simple-Technique-for-Generalization-in-Deep-Reinforcement-Learning) +- [On the Difficulty of Warm-Starting Neural Network Training](https://shagunsodhani.com/papers-I-read/On-the-Difficulty-of-Warm-Starting-Neural-Network-Training) +- [Supervised Contrastive Learning](https://shagunsodhani.com/papers-I-read/Supervised-Contrastive-Learning) +- [CURL - Contrastive Unsupervised Representations for Reinforcement Learning](https://shagunsodhani.com/papers-I-read/CURL-Contrastive-Unsupervised-Representations-for-Reinforcement-Learning) +- [Competitive Training of Mixtures of Independent Deep Generative Models](https://shagunsodhani.com/papers-I-read/Competitive-Training-of-Mixtures-of-Independent-Deep-Generative-Models) +- [What 
Does Classifying More Than 10,000 Image Categories Tell Us?](https://shagunsodhani.com/papers-I-read/What-Does-Classifying-More-Than-10,000-Image-Categories-Tell-Us) +- [mixup - Beyond Empirical Risk Minimization](https://shagunsodhani.com/papers-I-read/mixup-Beyond-Empirical-Risk-Minimization) +- [ELECTRA - Pre-training Text Encoders as Discriminators Rather Than Generators](https://shagunsodhani.com/papers-I-read/ELECTRA-Pre-training-Text-Encoders-as-Discriminators-Rather-Than-Generators) +- [Gradient based sample selection for online continual learning](https://shagunsodhani.com/papers-I-read/Gradient-based-sample-selection-for-online-continual-learning) +- [Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One](https://shagunsodhani.com/papers-I-read/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One) +- [Massively Multilingual Neural Machine Translation in the Wild - Findings and Challenges](https://shagunsodhani.com/papers-I-read/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges) +- [Observational Overfitting in Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Observational-Overfitting-in-Reinforcement-Learning) +- [Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML](https://shagunsodhani.com/papers-I-read/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML) +- [Accurate, Large Minibatch SGD - Training ImageNet in 1 Hour](https://shagunsodhani.com/papers-I-read/Accurate-Large-Minibatch-SGD-Training-ImageNet-in-1-Hour) +- [Superposition of many models into one](https://shagunsodhani.com/papers-I-read/Superposition-of-many-models-into-one) +- [Towards a Unified Theory of State Abstraction for MDPs](https://shagunsodhani.com/papers-I-read/Towards-a-Unified-Theory-of-State-Abstraction-for-MDPs) +- [ALBERT - A Lite BERT for Self-supervised Learning of Language Representations](https://shagunsodhani.com/papers-I-read/ALBERT-A-Lite-BERT-for-Self-supervised-Learning-of-Language-Representations) +- [Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model](https://shagunsodhani.com/papers-I-read/Mastering-Atari,-Go,-Chess-and-Shogi-by-Planning-with-a-Learned-Model) +- [Contrastive Learning of Structured World Models](https://shagunsodhani.com/papers-I-read/Contrastive-Learning-of-Structured-World-Models) +- [Gossip based Actor-Learner Architectures for Deep RL](https://shagunsodhani.com/papers-I-read/Gossip-based-Actor-Learner-Architectures-for-Deep-RL) +- [How to train your MAML](https://shagunsodhani.com/papers-I-read/How-to-train-your-MAML) +- [PHYRE - A New Benchmark for Physical Reasoning](https://shagunsodhani.com/papers-I-read/PHYRE-A-New-Benchmark-for-Physical-Reasoning) +- [Large Memory Layers with Product Keys](https://shagunsodhani.com/papers-I-read/Large-Memory-Layers-with-Product-Keys) +- [Abductive Commonsense Reasoning](https://shagunsodhani.com/papers-I-read/Abductive-Commonsense-Reasoning) +- [Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models](https://shagunsodhani.com/papers-I-read/Deep-Reinforcement-Learning-in-a-Handful-of-Trials-using-Probabilistic-Dynamics-Models) +- [Assessing Generalization in Deep Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Assessing-Generalization-in-Deep-Reinforcement-Learning) +- [Quantifying Generalization in Reinforcement 
Learning](https://shagunsodhani.com/papers-I-read/Quantifying-Generalization-in-Reinforcement-Learning) +- [Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks](https://shagunsodhani.com/papers-I-read/Set-Transformer-A-Framework-for-Attention-based-Permutation-Invariant-Neural-Networks) +- [Measuring abstract reasoning in neural networks](https://shagunsodhani.com/papers-I-read/Measuring-Abstract-Reasoning-in-Neural-Networks) +- [Hamiltonian Neural Networks](https://shagunsodhani.com/papers-I-read/Hamiltonian-Neural-Networks) +- [Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations](https://shagunsodhani.com/papers-I-read/Extrapolating-Beyond-Suboptimal-Demonstrations-via-Inverse-Reinforcement-Learning-from-Observations) +- [Meta-Reinforcement Learning of Structured Exploration Strategies](https://shagunsodhani.com/papers-I-read/Meta-Reinforcement-Learning-of-Structured-Exploration-Strategies) +- [Relational Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Relational-Reinforcement-Learning) +- [Good-Enough Compositional Data Augmentation](https://shagunsodhani.com/papers-I-read/Good-Enough-Compositional-Data-Augmentation) +- [Multiple Model-Based Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Multiple-Model-Based-Reinforcement-Learning) +- [Towards a natural benchmark for continual learning](https://shagunsodhani.com/papers-I-read/Towards-a-natural-benchmark-for-continual-learning) +- [Meta-Learning Update Rules for Unsupervised Representation Learning](https://shagunsodhani.com/papers-I-read/Meta-Learning-Update-Rules-for-Unsupervised-Representation-Learning) +- [GNN Explainer - A Tool for Post-hoc Explanation of Graph Neural Networks](https://shagunsodhani.com/papers-I-read/GNN-Explainer-A-Tool-for-Post-hoc-Explanation-of-Graph-Neural-Networks) +- [To Tune or Not to Tune? 
Adapting Pretrained Representations to Diverse Tasks](https://shagunsodhani.com/papers-I-read/To-Tune-or-Not-to-Tune-Adapting-Pretrained-Representations-to-Diverse-Tasks) +- [Model Primitive Hierarchical Lifelong Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Model-Primitive-Hierarchical-Lifelong-Reinforcement-Learning) +- [TuckER - Tensor Factorization for Knowledge Graph Completion](https://shagunsodhani.com/papers-I-read/TuckER-Tensor-Factorization-for-Knowledge-Graph-Completion) +- [Linguistic Knowledge as Memory for Recurrent Neural Networks](https://shagunsodhani.com/papers-I-read/Linguistic-Knowledge-as-Memory-for-Recurrent-Neural-Networks) +- [Diversity is All You Need - Learning Skills without a Reward Function](https://shagunsodhani.com/papers-I-read/Diversity-is-All-You-Need-Learning-Skills-without-a-Reward-Function) +- [Modular meta-learning](https://shagunsodhani.com/papers-I-read/Modular-meta-learning) +- [Hierarchical RL Using an Ensemble of Proprioceptive Periodic Policies](https://shagunsodhani.com/papers-I-read/Hierarchical-RL-Using-an-Ensemble-of-Proprioceptive-Periodic-Policies) +- [Efficient Lifelong Learning with A-GEM](https://shagunsodhani.com/papers-I-read/Efficient-Lifelong-Learning-with-A-GEM) +- [Pre-training Graph Neural Networks with Kernels](https://shagunsodhani.com/papers-I-read/Pre-training-Graph-Neural-Networks-with-Kernels) +- [Smooth Loss Functions for Deep Top-k Classification](https://shagunsodhani.com/papers-I-read/Smooth-Loss-Functions-for-Deep-Top-k-Classification) +- [Hindsight Experience Replay](https://shagunsodhani.com/papers-I-read/Hindsight-Experience-Replay) +- [Representation Tradeoffs for Hyperbolic Embeddings](https://shagunsodhani.com/papers-I-read/Representation-Tradeoffs-for-Hyperbolic-Embeddings) +- [Learned Optimizers that Scale and Generalize](https://shagunsodhani.com/papers-I-read/Learned-Optimizers-that-Scale-and-Generalize) +- [One-shot Learning with Memory-Augmented Neural Networks](https://shagunsodhani.com/papers-I-read/One-shot-Learning-with-Memory-Augmented-Neural-Networks) +- [BabyAI - First Steps Towards Grounded Language Learning With a Human In the Loop](https://shagunsodhani.com/papers-I-read/BabyAI-First-Steps-Towards-Grounded-Language-Learning-With-a-Human-In-the-Loop) +- [Poincaré Embeddings for Learning Hierarchical Representations](https://shagunsodhani.com/papers-I-read/Poincare-Embeddings-for-Learning-Hierarchical-Representations) +- [When Recurrent Models Don’t Need To Be Recurrent](https://shagunsodhani.com/papers-I-read/When-Recurrent-Models-Don-t-Need-To-Be-Recurrent) +- [HoME - a Household Multimodal Environment](https://shagunsodhani.com/papers-I-read/HoME-a-Household-Multimodal-Environment) +- [Emergence of Grounded Compositional Language in Multi-Agent Populations](https://shagunsodhani.com/papers-I-read/Emergence-of-Grounded-Compositional-Language-in-Multi-Agent-Populations) +- [A Semantic Loss Function for Deep Learning with Symbolic Knowledge](https://shagunsodhani.com/papers-I-read/A-Semantic-Loss-Function-for-Deep-Learning-with-Symbolic-Knowledge) +- [Hierarchical Graph Representation Learning with Differentiable Pooling](https://shagunsodhani.com/papers-I-read/Hierarchical-Graph-Representation-Learning-with-Differentiable-Pooling) +- [Imagination-Augmented Agents for Deep Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Imagination-Augmented-Agents-for-Deep-Reinforcement-Learning) +- [Kronecker Recurrent 
Units](https://shagunsodhani.com/papers-I-read/Kronecker-Recurrent-Units) +- [Learning Independent Causal Mechanisms](https://shagunsodhani.com/papers-I-read/Learning-Independent-Causal-Mechanisms) +- [Memory-based Parameter Adaptation](https://shagunsodhani.com/papers-I-read/Memory-Based-Parameter-Adaption) +- [Born Again Neural Networks](https://shagunsodhani.com/papers-I-read/Born-Again-Neural-Networks) +- [Net2Net - Accelerating Learning via Knowledge Transfer](https://shagunsodhani.com/papers-I-read/Net2Net-Accelerating-Learning-via-Knowledge-Transfer) +- [Learning to Count Objects in Natural Images for Visual Question Answering](https://shagunsodhani.com/papers-I-read/Learning-to-Count-Objects-in-Natural-Images-for-Visual-Question-Answering) +- [Neural Message Passing for Quantum Chemistry](https://shagunsodhani.com/papers-I-read/Neural-Message-Passing-for-Quantum-Chemistry) +- [Unsupervised Learning by Predicting Noise](https://shagunsodhani.com/papers-I-read/Unsupervised-Learning-By-Predicting-Noise) +- [The Lottery Ticket Hypothesis - Training Pruned Neural Networks](https://shagunsodhani.com/papers-I-read/The-Lottery-Ticket-Hypothesis-Training-Pruned-Neural-Networks) +- [Cyclical Learning Rates for Training Neural Networks](https://shagunsodhani.com/papers-I-read/Cyclical-Learning-Rates-for-Training-Neural-Networks) +- [Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Improving-Information-Extraction-by-Acquiring-External-Evidence-with-Reinforcement-Learning) +- [An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks](https://shagunsodhani.com/papers-I-read/An-Empirical-Investigation-of-Catastrophic-Forgetting-in-Gradient-Based-Neural-Networks) +- [Learning a SAT Solver from Single-Bit Supervision](https://shagunsodhani.com/papers-I-read/Learning-a-SAT-Solver-from-Single-Bit-Supervision) +- [Neural Relational Inference for Interacting Systems](https://shagunsodhani.com/papers-I-read/Neural-Relational-Inference-for-Interacting-Systems) +- [Stylistic Transfer in Natural Language Generation Systems Using Recurrent Neural Networks](https://shagunsodhani.com/papers-I-read/Stylistic-Transfer-in-Natural-Language-Generation-Systems-Using-Recurrent-Neural-Networks) +- [Get To The Point: Summarization with Pointer-Generator Networks](https://shagunsodhani.com/papers-I-read/Get-To-The-Point-Summarization-with-Pointer-Generator-Networks) +- [StarSpace - Embed All The Things!](https://shagunsodhani.com/papers-I-read/StarSpace-Embed-All-The-Things) +- [Emotional Chatting Machine - Emotional Conversation Generation with Internal and External Memory](https://shagunsodhani.com/papers-I-read/Emotional-Chatting-Machine-Emotional-Conversation-Generation-with-Internal-and-External-Memory) +- [Exploring Models and Data for Image Question Answering](https://shagunsodhani.com/papers-I-read/Exploring-Models-and-Data-for-Image-Question-Answering) +- [How transferable are features in deep neural networks?](https://shagunsodhani.com/papers-I-read/How-transferable-are-features-in-deep-neural-networks) +- [Distilling the Knowledge in a Neural Network](https://shagunsodhani.com/papers-I-read/Distilling-the-Knowledge-in-a-Neural-Network) +- [Revisiting Semi-Supervised Learning with Graph Embeddings](https://shagunsodhani.com/papers-I-read/Revisiting-Semi-Supervised-Learning-with-Graph-Embeddings) +- [Two-Stage Synthesis Networks for Transfer Learning in Machine 
Comprehension](https://shagunsodhani.com/papers-I-read/Two-Stage-Synthesis-Networks-for-Transfer-Learning-in-Machine-Comprehension) +- [Higher-order organization of complex networks](https://shagunsodhani.com/papers-I-read/Higher-order-organization-of-complex-networks) +- [Network Motifs - Simple Building Blocks of Complex Networks](https://shagunsodhani.com/papers-I-read/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks) +- [Word Representations via Gaussian Embedding](https://shagunsodhani.com/papers-I-read/Word-Representations-via-Gaussian-Embedding) +- [HARP - Hierarchical Representation Learning for Networks](https://shagunsodhani.com/papers-I-read/HARP-Hierarchical-Representation-Learning-for-Networks) +- [Swish - a Self-Gated Activation Function](https://shagunsodhani.com/papers-I-read/Swish-A-self-gated-activation-function) +- [Reading Wikipedia to Answer Open-Domain Questions](https://shagunsodhani.com/papers-I-read/Reading-Wikipedia-to-Answer-Open-Domain-Questions) +- [Task-Oriented Query Reformulation with Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Task-Oriented-Query-Reformulation-with-Reinforcement-Learning) +- [Refining Source Representations with Relation Networks for Neural Machine Translation](https://shagunsodhani.com/papers-I-read/Refining-Source-Representations-with-Relation-Networks-for-Neural-Machine-Translation) +- [Pointer Networks](https://shagunsodhani.com/papers-I-read/Pointer-Networks) +- [Learning to Compute Word Embeddings On the Fly](https://shagunsodhani.com/papers-I-read/Learning-to-Compute-Word-Embeddings-On-the-Fly) +- [R-NET - Machine Reading Comprehension with Self-matching Networks](https://shagunsodhani.com/papers-I-read/R-NET-Machine-Reading-Comprehension-with-Self-matching-Networks) +- [ReasoNet - Learning to Stop Reading in Machine Comprehension](https://shagunsodhani.com/papers-I-read/ReasoNet-Learning-to-Stop-Reading-in-Machine-Comprehension) +- [Principled Detection of Out-of-Distribution Examples in Neural Networks](https://shagunsodhani.com/papers-I-read/Principled-Detection-of-Out-of-Distribution-Examples-in-Neural-Networks) +- [Ask Me Anything: Dynamic Memory Networks for Natural Language Processing](https://shagunsodhani.com/papers-I-read/Ask-Me-Anything-Dynamic-Memory-Networks-for-Natural-Language-Processing) +- [One Model To Learn Them All](https://shagunsodhani.com/papers-I-read/One-Model-To-Learn-Them-All) +- [Two/Too Simple Adaptations of Word2Vec for Syntax Problems](https://shagunsodhani.com/papers-I-read/Two-Too-Simple-Adaptations-of-Word2Vec-for-Syntax-Problems) +- [A Decomposable Attention Model for Natural Language Inference](https://shagunsodhani.com/papers-I-read/A-Decomposable-Attention-Model-for-Natural-Language-Inference) +- [A Fast and Accurate Dependency Parser using Neural Networks](https://shagunsodhani.com/papers-I-read/A-Fast-and-Accurate-Dependency-Parser-using-Neural-Networks) +- [Neural Module Networks](https://shagunsodhani.com/papers-I-read/Neural-Module-Networks) +- [Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering](https://shagunsodhani.com/papers-I-read/Making-the-V-in-VQA-Matter-Elevating-the-Role-of-Image-Understanding-in-Visual-Question-Answering) +- [Conditional Similarity Networks](https://shagunsodhani.com/papers-I-read/Conditional-Similarity-Networks) +- [Simple Baseline for Visual Question Answering](https://shagunsodhani.com/papers-I-read/Simple-Baseline-for-Visual-Question-Answering) +- [VQA: Visual Question 
Answering](https://shagunsodhani.com/papers-I-read/VQA-Visual-Question-Answering) +- [Learning to Generate Reviews and Discovering Sentiment](https://gist.github.com/shagunsodhani/634dbe1aa678188399254bb3d0078e1d) +- [Seeing the Arrow of Time](https://gist.github.com/shagunsodhani/828d8de0034a350d97738bbedadc9373) +- [End-to-end optimization of goal-driven and visually grounded dialogue systems](https://gist.github.com/shagunsodhani/bbbc739e6815ab6217e0cf0a8f706786) +- [GuessWhat?! Visual object discovery through multi-modal dialogue](https://gist.github.com/shagunsodhani/2418238e6aefd7b1e8c922cda9e10488) +- [Semantic Parsing via Paraphrasing](https://gist.github.com/shagunsodhani/93c96d7dd0488d0d00bd7078889dd6f6) +- [Traversing Knowledge Graphs in Vector Space](https://gist.github.com/shagunsodhani/e8e6213906ec2642f27b1aca3a6201c6) +- [PPDB: The Paraphrase Database](https://gist.github.com/shagunsodhani/fa1f387f084355dfafdf7550b1899af6) +- [NewsQA: A Machine Comprehension Dataset](https://gist.github.com/shagunsodhani/c47f0d5c1dfe60ce5da0dd8241e506ea) +- [A Persona-Based Neural Conversation Model](https://gist.github.com/shagunsodhani/8ad464e7d0ea4c7c6ed5189ac4e44095) +- [“Why Should I Trust You?” Explaining the Predictions of Any Classifier](https://gist.github.com/shagunsodhani/bd744ab6c17a2289ca139ea586d1d65e) +- [Conditional Generative Adversarial Nets](https://gist.github.com/shagunsodhani/5d726334de3014defeeb701099a3b4b3) +- [Addressing the Rare Word Problem in Neural Machine Translation](https://gist.github.com/shagunsodhani/a18fe14b74c7292129c6c5ecb37f33b5) +- [Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models](https://gist.github.com/shagunsodhani/d32e665b27696ce0436c79174a136410) +- [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://gist.github.com/shagunsodhani/6ca136088f58d24f7b08056ec8b97595) +- [Improving Word Representations via Global Context and Multiple Word Prototypes](https://gist.github.com/shagunsodhani/1be86a9bcbd7f120ce55994dcd932bbf) +- [Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation](https://gist.github.com/shagunsodhani/9dccec626e68e495fd4577ecdca36b7b) +- [Skip-Thought Vectors](https://gist.github.com/shagunsodhani/4a4eb32de8cabf21bda9a4ada15c46e8) +- [Deep Convolutional Generative Adversarial Nets](https://gist.github.com/shagunsodhani/aa79796c70565e3761e86d0f932a3de5) +- [Generative Adversarial Nets](https://gist.github.com/shagunsodhani/1f9dc0444142be8bd8a7404a226880eb) +- [A Roadmap towards Machine Intelligence](https://gist.github.com/shagunsodhani/9928673525b1713c2d41fd0fac38f81f) +- [Smart Reply: Automated Response Suggestion for Email](https://gist.github.com/shagunsodhani/da411f15b71ed6a664f9d5ac46409b42) +- [Convolutional Neural Network For Sentence Classification](https://gist.github.com/shagunsodhani/9ae6d2364c278c97b1b2f4ec53255c56) +- [Conditional Image Generation with PixelCNN Decoders](https://gist.github.com/shagunsodhani/3cc7066ce7de051d769908b8fab11990) +- [Pixel Recurrent Neural Networks](https://gist.github.com/shagunsodhani/e741ebd5ba0e0fc0f49d7836e30891a7) +- [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https://gist.github.com/shagunsodhani/f48da7f77418aa22751ffed115779126) +- [Bag of Tricks for Efficient Text Classification](https://gist.github.com/shagunsodhani/432746f15889f7f4a798bf7f9ec4b7d8) +- [GloVe: Global Vectors for Word 
Representation](https://gist.github.com/shagunsodhani/efea5a42d17e0fcf18374df8e3e4b3e8) +- [SimRank: A Measure of Structural-Context Similarity](https://gist.github.com/shagunsodhani/6329486212643fd61f58a5a3eb5abb3c) +- [How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation](https://gist.github.com/shagunsodhani/f05748b6339ceff26420ceecfc79d58d) +- [Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge](https://gist.github.com/shagunsodhani/004d803bc021f579d4aa3b24cec5b994) +- [WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia](https://gist.github.com/shagunsodhani/2788ac9dbcac5523cb8b2d0a3d70f2d2) +- [WikiQA: A challenge dataset for open-domain question answering](https://gist.github.com/shagunsodhani/7cf3677ff2b0028a33e6702fbd260bc5) +- [Teaching Machines to Read and Comprehend](https://gist.github.com/shagunsodhani/a863eb099bb7a1ab4831cd37bffffb04) +- [Evaluating Prerequisite Qualities for Learning End-to-end Dialog Systems](https://gist.github.com/shagunsodhani/5e7c40f61c18502eec2809e5cf1ead6b) +- [Recurrent Neural Network Regularization](https://gist.github.com/shagunsodhani/d66245692b276cd0b6dcbaf43e4211db) +- [Deep Math: Deep Sequence Models for Premise Selection](https://gist.github.com/shagunsodhani/d8387256f2bb08f39509600f9d7db498) +- [A Neural Conversational Model](https://gist.github.com/shagunsodhani/ec6835964df0e49fdef0459c8b334b94) +- [Key-Value Memory Networks for Directly Reading Documents](https://gist.github.com/shagunsodhani/a5e0baa075b4a917c0a69edc575772a8) +- [Advances In Optimizing Recurrent Networks](https://gist.github.com/shagunsodhani/75dc31e3c7999ad4a1edf4f289deaa88) +- [Query Regression Networks for Machine Comprehension](https://gist.github.com/shagunsodhani/93caa283af3c151372f4be86ed4c4b99) +- [Sequence to Sequence Learning with Neural Networks](https://gist.github.com/shagunsodhani/a2915921d7d0ac5cfd0e379025acfb9f) +- [The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training](https://gist.github.com/shagunsodhani/e3608ccf262d6e5a6b537128c917c92c) +- [Question Answering with Subgraph Embeddings](https://gist.github.com/shagunsodhani/b65e299ff5f79a4f9da4a2e9281a0676) +- [Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks](https://gist.github.com/shagunsodhani/12691b76addf149a224c24ab64b5bdcc) +- [Visualizing Large-scale and High-dimensional Data](https://gist.github.com/shagunsodhani/6c267cf6122399e9be36491a2f510641) +- [Visualizing Data using t-SNE](https://gist.github.com/shagunsodhani/2153e01d026712ac94a2b4928a2dbf3e) +- [Curriculum Learning](https://gist.github.com/shagunsodhani/7e4e1c9817c46e3cb1932f62aac8806b) +- [End-To-End Memory Networks](https://gist.github.com/shagunsodhani/17881da05d9ee1f6539b2baa8067a6ef) +- [Memory Networks](https://gist.github.com/shagunsodhani/c7a03a47b3d709e7c592fa7011b0f33e) +- [Learning To Execute](https://gist.github.com/shagunsodhani/b44b29b86cdfe1b6bae4286253f76350) +- [Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud](https://gist.github.com/shagunsodhani/1bb05a7134c27cffa1e2f57dc6b1c136) +- [Large Scale Distributed Deep Networks](https://gist.github.com/shagunsodhani/5733fffe6b1a268998bd93f29ec9fbeb) +- [Efficient Estimation of Word Representations in Vector Space](https://gist.github.com/shagunsodhani/176a283e2c158a75a0a6) 
+- [Regularization and variable selection via the elastic net](https://gist.github.com/shagunsodhani/1cd5d136c8ca30432de5)
+- [Fractional Max-Pooling](https://gist.github.com/shagunsodhani/ccfe3134f46fd3738aa0)
+- [TAO: Facebook’s Distributed Data Store for the Social Graph](https://gist.github.com/shagunsodhani/1c91987c2a4a098fa9f1)
+- [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://gist.github.com/shagunsodhani/4441216a298df0fe6ab0)
+- [The Unified Logging Infrastructure for Data Analytics at Twitter](https://gist.github.com/shagunsodhani/0083f8a2d276e026b15c)
+- [A Few Useful Things to Know about Machine Learning](https://gist.github.com/shagunsodhani/5c2cdfc269bf8aa50b72)
+- [Hive – A Petabyte Scale Data Warehouse Using Hadoop](https://gist.github.com/shagunsodhani/b0651ade0dc39aeb7cfd)
+- [Kafka: a Distributed Messaging System for Log Processing](https://medium.com/@shagun/notes-about-kafka-cc6c1b5c5025)
+- [Power-law distributions in Empirical data](https://github.com/shagunsodhani/powerlaw/blob/master/paper/README.md)
+- [Pregel: A System for Large-Scale Graph Processing](https://gist.github.com/shagunsodhani/af9677bdc79bb34be698)
+- [GraphX: Unifying Data-Parallel and Graph-Parallel Analytics](https://gist.github.com/shagunsodhani/c72bc1928aeef40280c9)
+- [Pig Latin: A Not-So-Foreign Language for Data Processing](https://medium.com/@shagun/pig-latin-e840ac23db93)
+- [Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing](https://medium.com/@shagun/resilient-distributed-datasets-97c28c3a9411)
+- [MapReduce: Simplified Data Processing on Large Clusters](https://medium.com/@shagun/mapreduce-1c88f8a7c3d2)
+- [BigTable: A Distributed Storage System for Structured Data](https://medium.com/@shagun/bigtable-bf580262f030)
+- [Spark SQL: Relational Data Processing in Spark](https://medium.com/@shagun/spark-sql-68a6fac271fe)
+- [Spark: Cluster Computing with Working Sets](https://medium.com/@shagun/spark-8ca626d55d21)
+- [Fast Data in the Era of Big Data: Twitter’s Real-Time Related Query Suggestion Architecture](https://medium.com/@shagun/fast-data-in-the-era-of-big-data-e6208e6d3575)
+- [Scaling Memcache at Facebook](https://medium.com/@shagun/scaling-memcache-at-facebook-1ba77d71c082)
+- [Dynamo: Amazon’s Highly Available Key-value Store](https://medium.com/@shagun/dynamo-9665c22a1ddb)
+- [f4: Facebook's Warm BLOB Storage System](https://medium.com/@shagun/f4-cba2f141cb0c)
+- [A Theoretician’s Guide to the Experimental Analysis of Algorithms](https://medium.com/@shagun/dos-and-dont-s-of-research-fe33322c7aff)
+- [Cuckoo Hashing](https://medium.com/@shagun/cuckoo-hashing-eb160dfab804)
+- [Never Ending Learning](https://medium.com/@shagun/never-ending-learning-e7b78006e713)
diff --git a/_site/assets/BatchNormalization/eq1.png b/_site/assets/BatchNormalization/eq1.png
new file mode 100755
index 0000000000000000000000000000000000000000..d4620aff885d6a5fbd18e6f7f8fe78e1d4990a20
GIT binary patch
literal 2726
[base85-encoded PNG data omitted]
literal 0
HcmV?d00001

diff --git a/_site/assets/BatchNormalization/eq2.png b/_site/assets/BatchNormalization/eq2.png
new file mode 100755
index 0000000000000000000000000000000000000000..435b17128014a7c144a89ec818b8b79ab81e9438
GIT binary patch
literal 1331
[base85-encoded PNG data omitted]
[base85-encoded binary patch data for the remaining image assets omitted]

g7M>m-8d&RGJJoQ8W>^JsFbtd|E`AXMm?gUo zS@H3sNW5AMyj}}A`RA@R7ptEuf;&o%&U^eRc-?O42RjY2#l@!Dwg!EX;6K`Y~G+$yCNqSK3(@( z^S_0Ipr6EOEu#6T&MjJW3)XT z*U+sTmbwK<`t3)V36>}|K2mcgk8r?*z*2Lx2wFuMCP$+bY(CmN} z7~gz98*y=%h5V6;2{R<@W$FIkUs3%T+1Mf&6y4h~zJnZdT3Y1g=6y0f0E-9R; zEjnEKRE4;>xWw4l9JK=s;$@5O-v~&BRaNt>!!vbmhgR=v$8`;xm{S6%Q&m%2-XJ&E z`lkrUO4pWTuhbzbXc@?tyKewE1%gz+#kIhinK@N-NW4$Kau_*(otDy9<;@h!&wPot z5a?fOPupSp%tIAaC(pC;onbU~(oa02X-Uh8Bcb~yGM7Q*tc4ty5S0*;ppOD&nDd`Q z6Q`$HsT_YFtTsEcN@R&fBS$;^EE0NyH{WQ*qfY zB7d=9l`p6R%MclX_*Mt=TFqn;^wPA&@@)(VxcD251gt}Q8^QP8gtn=oKm)tDq~wze zVP)e?LPfHa;(@%mjuoa3;_XjXixti*`15A-jm$jw{Wkr;WH(3r3T&(h#}2R5#91M*BzogJe%25bH zupJ2lgR5O6fS29FF}YSUB^LkJC#b8tFC>y1T2XW{HyTInwj>Fb3+4CNV1Iv211c2{ zv#=rNV0*Ed7BNRYdH^BTM($l;!$#vcbTnh1GfT-JSlE6E>1)1zn`2>W6aF^Jo(QMy zsu!@Q6C>4Y;^Q4NYtn^FUnfr@-ICuAV(rf(<$xhJsh0OFoYC3 z-Z-)ci?Q@MB9UtSZV_o&sm(@xJltyjx=(Bt{-`^xm6geZ{Da8^(4a4*D>+RBRw=Ru zeVQkD^;kdbF}SR#zL60`?R_CQBDA@=+=Q)Ay;8B!As&RzGe7?;m$CE*o=HSP0@f@^ zQc^rY&yT5d;lyOVak%%w2a}xZAF#d!TIJL!b6DeRk72Yz zwoJCJ>+^u~IzC9^6=~IOVrqKXNbZqaiCQ^F2Kp&=&=q7sw$=;uj9jk+VviA{b7$Dp z$&LBz?^z=86_^x2yeSdk2Saha^ng3n`ltx%o=cHJ#`um=y(SolA&bSIcg0$Q#xGwz z_3UbFyookRvBB*`S;;X=a?I7Kimqx-^$oK!5^*Q_@*_~5NKn2k>0c(79m!Z0Uy5O# z(I*uKMkpxABr>*!8v%}RHW@Dn8W?Xl(*IVlYYF;X{*bCFG!FK9yML9f3{h{A@gJy{ zfJJf=;4a*goc@kaNVpe?O-$@dnljiDec^@wqz;Xq?@!p5%K%n&xp7&FWX9x-PC<1WIXPk&mN`Z+3rf?;yMP@yE&l{Db!0It z_Xdk!3P6)=TW)Rl?eQ>jMh|AmjV?hP>#C}9Bx&k&xHAlJ4tR)rz_dy!DeoOjwH6<5 zsKpBNs;B)U>3qr&{%R|_xAqedu^psW9&i$e`(2S#@AN3!RrOB%p5vIc<%tZ&e8T>Vx0+nU?C zw{~a6*LE(EC8q4v9U7|C$%N4E%FS(=QVF%^5-VevCK&I%vQ7o zpnDDP83U*;(kFZ|9OVOmMls$;Ps_Mt#yT8Pb+;)3tq6Rvwv$*2GQ2Xzf|39zy|IM!hkJmS%-h$r0=~?)K zzMUT#G6&g-Y*FJI7bj;zQ&URs{=YW>F+8qNT1&sbXldbnT`ADjR#6fC`BQjybybXt zit4DY4dvXfFd2m8;prKhl9G0QL873*bi>P*Hf9R6id14OV!Yf_Ke4l`dpC*;EIja{ zt(ah!ESm1PBK5>}pS9VQ+bQo1sYG$baH@pC>}V6V;w60XS)W}sCuiaq_)S~(P@@*X znL|K8P|{MAe%?OI%~5utt)}+JNk~zVEXMm)^VrX-b)kcvtSfua)83w$ql|}wj9Kwz zBLM)0jg@C)MRh2B1?^Wc8pGR#&?|8t4;()_IsFuXhfQiP;YmBqNLM>Fu2jq6ih&wW8C-8pMTaBbnH8 zU}8$ZjI21aQ-SL)-Es7nHx=5_?8Z~0h~h|eSTmXe=`)Nm>>B9s3lA9(EtEK9rerI; zT0-mguuXIm`7e zMs(W?vGHuS#rY{t_VG>CZsh+vR&f2=*U$y<^_nq~;fyV4kQkWp<#ue9Bi+QgV zy99^r`uCj};zNEpfUk=a(CdIA8ZU!YD}MhDAJ63Jov#5p)W1tg6nn}-$Fl_`fp`M| zRmbK)2$}x3)$73;9TT%N(Y=9zk&D_{#>WEq6c6GehxQUQJPvXKEai(0ttQ(-Ku|X+ z$F(LF7uWlV6EMnbwqMtU7vv9)2<&Y6kUB?8x?Rb&aq2vT{7IYccE5#S0mL!R^6Sr^ zky6byTwO-v5rS7e`$J3|Y)}czA_9pFP>FhI_gW4hm3EOeb{XWtAf?$h)UjL9e1^`w zh_|U5@Tj*@@V0@{QKLWy-PP8zws_vq>L_n`(e7jGw7JLWC8wuL4_j`*A!WzJpglZ3 zUT_a?sJ8)siEpY}y3M$E+om?^|NOr0K=}pf(4|{lcl``di^woFcxQP6%#aXjh4G@P zd^T%;25Rbb5fOjuYzQ}+ay$DKZbz>k<=~0yJGAPjGtiA^6)Ggsig8VSBbK2AB2ra# zNubvL?l93KK(2Y%X0w?72y&=iH|QBOs)i(Jnk$UOS3H11*Jwjm-YFmeaU7?9{`5(o z+~xisMmmQRbqpaqXJk@A0W^m$!|qReSOt{~#XL?onlr0ecT{9leU`b&$)Sm3Tcs5b ztM&y3bG*7-jC9psK{-t!r!HfbsHG*J_8;KXJmLB2Deg|!B-!RTQ3*LC^?3O$xy8lB zFYcqGqEsO#wE8l!D1qho@u%r=_|m6vz)P8@vZAFWT|SGqpHbdVl_hX0@$=Hjx~#ms zVT6{VqEN_$3pU&Sfx~2WGV@NkqL$XiSu>?1YTXK4rKUDACMM=Rq~BLk3TJ?XpRP4{ zGDd-UM%MRNnBA~t%J5ot+^9JLU$HvBCO53jdz6^LR*ftcSWp&q(!E~!wWcFv!XHS^ z{?uQylfc1iNJ-1%Ek>O)RBz9*SMQ-x{ClS&0Sa@}naM#;`f!9Wg!ep~jKazmWvsN& zs+}06owh7y99U#v*O`QKPARqh91vSpQnLNk;qCQ>zs86)Bm)~_2p|6tU|n@zFnaon znq{x3RECfpojo;tjHJG4WpIAJcW}kgX?6_SIRLP*&&zS&*vz% zk6(e|d$6fpKhU%_4G*znFfV5!R!FsHsg&FsQPZ-_D=+9fw?0rLad~lbc5)0$n%~g5 z^oZ{GS#zb2UPR>IQyfDpK)#8-3NS&ve|(sZ;#wct(VXbJsYkWPaDobXXU0)J-I3<3 zuJUs%;Ve;iG&}8I8pB<{Hc|X-1Vl|7TQIe;qG@RA^|70N`;V-Ma4JJCx9g9OuzyBI zzBg$DhM;(GO1%{L1Oy^RgV82Gxqi4CS9QF#gb4suA7zCmKLLmeo)qj{>Q^Vnq81nV zA;rj1x|aIB+(be@$Wp0{K)skZ@xuQ0Y6R~t*emAN!4uuMdb!3#SwK%mHGjZ`MP1!th&ezUy-lz2(U^=H9an5s 
z(Xe}P(7Mi=V68Ix;BJGCZF5ps&`p!$I77*t%mN~US$*w`?h}jF3!ts);~mnpN7r@=IH`voc&^h|E@sbl6I6~4aQfmER6R_ z0Cf%73^bSaPvnrQIMMRV>b7P#JCWel*Z>&J%I~4&ex(t)WYIq-1ivE$2GMx{;Zff9M z>Ro4NnL-k*(9>WTRF~`6t%xLk+T{(P;jvmMg6p(UHyYk(@bc{8?_@N+Y@X{%08K>S z6&4nD@>hX4t*2ogy*eh9D_>@Ex*+QlajjxVZR||bTD5(DM^Mt0r4kE`PcAq5nh-c~?r(TR5YYqURgK{T;&* zuq5J8`}dALyJx&R#EYT{h%GItSkKG1NgU&>Omu3P3f=V(RL8JVxHR1C;vS4Nj88I- zENp}T{tse6|HQB+IW&C1%~H4Nw(Vmp5dGMMx9wx%jq5`4@Hiwj8fqGXnx%s!i4q`dJ`_ zBESoWNxME2*!uwy1y7+EajW7}zEig%uH$XVi;_wZN!=U~Uz6RNPi22dGftrl|0``nw%pyhFcnqnn8YM&9&xY3shAW%WJaiaXD-ub-Qng>~K$$$v0)#7ZG&|LZ! z`;2XQdU~{BLce0B74rjQ^h!!CKE>`b#P^08IrS(_X3CYf8HI(Km7Di=D!^5;A7GBB zR#!x77YWQ{-W{G;P&&%+OXPHqfRuVC=tDf7;v0(n z+jQl@2oJePCCas%&HO+EV={SS99gi)NN9{s#}9lMSm~TV2V=z2?w8K|%~y?|RR$x*np8v8RDGRyoqa^$dV-?wC?RB7oLYn!bryRm7 z0v645Jl}c27E7-+jNe6zJYj1@EFrdHC_WkfcF?}wrud42@1x4u>-JA&vH4^(QEx$) zAb1`FDMb|(I~le6=c|Bgy|_Zv8_U-EjAf$+`iQ(E6YK@%U13p}0F zVdLOb1m1TlX=_JTaHO`2s|IFz1#ojK>gdRwceMI{dDz8zyom7?n=zH$yXUIY?X506 z6+kuG6dOW8M0fPG(^CJo;lAV2*a29X>lFdy!vtSHU#*oo@{THAQDC4MgFPDpD|jXZ z?uY<7O$0y?DP6nrL}J8ngp#ofB&yYXdH%8ZCw%h&a1TjB1UPZ005b>yt9FrsM#MHW zH+ugp%j=+`>2;WcD$3z zz5W)YrsF3wME%3n?%AXgnND77rYREO*8vtdzXIZzt<`nxUvZh zC&ov>Ze5~j)e*^zFkiing`~F5k)`nsZPFF-MbyIrCMY=M--c?^T)jiD#N)V%!qf;; z`+EoC#G9eE`wK+yBhKR4o#<$HRyg+{;BZ~F`p0fKzA#b1gViHduF;W|AsHl}vza>B z{Utac5+M07{G{U`>q-eCN?#GY8fCj~KlhFSH1&my|FW8VAt`pXk@~*O&(BXX`96?h zV_}VTPXk8qe@_77jZ;XfB!**Ul^sJ`Bf4 zC(^2iK)`E!d@qDrK2xaXf~9f6(J^9aeV6wdC_TVb)jc_x@U^*VB4_xu*srha{!o&Ps9?})oGekwKakOT#MIh?zU%e*JN*_D zm}E34Z$HNKEwc0FmdSoKv*tsM(rB8e*A`AC(|pnGLDUiM=EzbJx*1h}VZArdC~Naz zV;ZNHOD|~Ngl#EI&Z}(;-?5!~=?6gT?aNpWonlb9k1LBOiA~fl#)hh*dRN6hI;`I~ zCWTl_SEbGOg!KKmS$~j^eZxApFQZ`43lfH%OelE%++K)ye_-=9(&_9%YuQL2@Wp58JUXs*SToHnq;fJYfKBAA}g(8;;Li#O6^BV$Hx4}fjQ zArRjxOF5(R;J>yy+XFY*TNeX&La=_ee?0LO1mFfH7o7C0;T0i`r(BzRr^6Iyj#DNe zQcy$?$k%~|DwD6i%hFnH>PWSjL=QD)#mz73>HR&8Vcz zn;GjqNen(YtPDq)rA8C{< z@mGNNc|AOsZ9lc#hw+u3k^E%Y0N7;F9Zc)eUQeywB({2cN*Wt~Jwv?^`G#%cCCtm< zZgYM77^qc64r=Mh{W*d1mXq5oC_aS1h{URLb;&qcBu997eaN=@k0dP|BtzF~TFox; zBl9C0n8sR;jpwy8Vvv>S}M1Iur~l-#pc-9e!Y~#-q1)kzOV9 zfY2d`CKiyUTh|f>mZRJyCsP$eM}N~Ki&Hqa-*M|icBni5cD-f`#4wJw<1v92Bb3LJ zyVDZks}veTB^m{4jWbjGbv*BaCKx)tGh46~p?ysANA@XE@>QjzP@lnDt~c(oyNRjE z)UD0RL8fL26%}<&iD^n}H^cE6u!d-(e!G$6wDj=c@&4ESWNjndjF3pt1w$kf4W&1x zOR)_NvNo-w!40;xwao*%r%I}-I6y_cr9nk>GEKCXiLYM1jeG}|GP)xLn&zCe4vCqW zBJYLby+mDcvoB1hJZX%ZrY6Bg)w4O32W=cK7FKVN<;5L9i?MjP+ymNSLNQ++KfU*1 zS)A`Rk9R+cRNig!|NAB~E?R0LsHX>v)necv`P-^nLs~&*-#g40AG8Jpn})yNIy^^T zpZEQiB*1GD2WXH*KoXFW3f6L~c`_b`gFkAyE!Z3u4o=zG8Tos-B$K520T9YHv|_!F zm~dsPUlS1)FuC4nLkKW6gKe81K!|8)J9*E)AlY=`bGTMfQtiGt2_yCxh zm>&O!0YcCF28T+7@&F)L(gp;Xv(;pE7 zWWtL!*G;FdGXdzt+VlJwp=*|3OrjHaetWr_Iheu^)R^4Z2oWbKX96BIu{N_nc>>R> z!4Rmtyu1ML%e-)+R8cv7mxypD{^Y|+<__gn-)^*Ck_Aq&g+NGz6TODF4rOa^9DA3y z^3&7(^~II6%*f}D?4ba@Q<|6OuU4fir#h!v?>T4bdc&a65-3hY(Az8W5!C!z*ixHr zgeK}Q3UDpsoOchDcpaX(Q%JTi8E?VD%@|(57B7#oYB(fqV{1!IzcSTf0>G@=!JEIl z&I)H|u^9#OQj)sbL&3R0B!mldW3LT0H4Z-*iB%Ii0P=EwRA^!PQA^{zP>>) z^WCt7>w)8{nN`C5>4LBXbN5+h9})yQXMuisyDu|f;(5YaRYJn_dmqDjF0ouBKS+~#rL`yKpys9?;PJXVvM*qX zyV6M~9XzQq5v-wdEuM>Z9ee__;yHn2kXm~I6noLd0dSFOd3$~OL}!(0T!7dP`<1o3 zyZfESmCo%z;oay$=iPBG)C!GYuqkxDbJe_Wu~QVqZgPbEhba52(p2xt9J-U~wP7bZ z0-!i>Sd^5tra#^2%$CpO7XH}{@ywezz&!lfb?vJZi7*mqbYC9*la`9Mc25?${g2IG zA>!Yf!dX4X1aT$k9sOql=8}tx({w;3Sa)4eXeCJY)PN0cO_?mcy@;Jq24`Xs0^()y z!GZekPjtlqXRoX?e`td#w#~bbmMa!pr$eNw^Ocv1ipD`!d$&6Dmg3Aa=KPCKM{%=v z7PtB)`XviMbM`z1N_u|wOo?cMQc#{;X0^f7@_5Jhzja@*2rwl8ha@0Ao}k6~5Yxm= 
zfjmOlP7^Y)sEF^x7kB*}0JDnWPMZLck1Svd2ZBSPtbxFnq1XFs}~ydbBFZp9u9zQr`~OTG_s&W52Y%Jd_+gq)|ku;}NrMP6UZBZat9nPpz4#TCVL7xJx)rP=#`^>EA^Y0n+~=BW&SSo;E)f<_U;jp|F{y4@t7sX~WRkYA zBT@HlN(H!GasXSjkivs8Z%1k1*{sxzOP+x{UgkbX;rB}llcWMkc%pQdP33d>S>#q71#g| z`7tfCFaQKZSEOy0D)Xp%?#Zb#zC>iWzxoin`#JOs;2MNN{OS)9_Ew4Hgjf{-{RwYf z`#Daz#BXioS&UPj< zOjll>(iy*!irjw%HB5|GJy-tC)PS6GZ37xt{pn%@gyI`*pPkdWTqrwUzF@+3fABX? zT_A@rfe74S`uqE>UK&r z+vWhOm)NWOmBknny>oDE#uQhTG(o}Ga6ufuI2NOBu3CkLlA2i44rFkP-FqmobuIZy zr>D9m5ik!hLuR)=9(ezY0qjQ8Sr46n)hu9ET03jfjV}mo@D4(g`h)jW0QUTV2K!(0 zyWng3Tl1zydC^PX+1c;z*5NlzPp`Qfi-oV0a_@2N4wm*hK#ydMN}q(hab{IK7h8zf zlSHk5`xm}-H#aB4L6umTTHZK+lVOwRpZi^kzEyLKC4J7(l#}yxmcZHK0lQj7t?q$6+E3iU&iCoDR0 zzb5dH#B^mE()8Lq2?)b&k~e}hLo^>FKL~l3NY1c0Y@iW3kywoXZqM)i-dORA-U{FQ zxz%pi2lfJ_62~sj$187|!6vHti{+}SjKPti;?$vqx-BL%3$A>RXkIq1{AJLn%vbco z@u0}2#bSBB#h6oIcXqCRGI#aRvob2FAdM0`!u3|E7#P$%gsaoH6&I%y(J?UWwfp=% zmy?%Q2CGh|`aZ9~meI9h_v^?p@V~uM8}@I>{D2n9{qHWYA&oqjHpBi<->vMs{{v5T z#&thjFD>&ywVJ0UQJr8QV>ya7m!f^kLxo1f&;9heWO~!RZ1eR#o8X`^afJfnEvk`( zRg@WBK7W{+7jbO?Hyh~V^{Q?N^xsqo84hu82VO%VJ}JT@!MP(8Ap1!_uEIdTRPn-9 z2yRnz88u{K^pp&Ph*Mcl=7K$(_7U7|UKiX})i(R*(cA zvISU#Om3bxPPvQoJ`8*s5*D6%Egqj2Zj?jvJ$fC-`f5`L>VnDEt>y=0!)U$220l#^mMa_j*C0I z(rJm05RIq4X8ApK^b-JL(7$zdj;reF6>WNiHal#IJ2^S6PePfY0+W2?;)JhU+4jnlo`J?P@fH{-whr1T$lxE=FPyEYLvsNO%N(0U^f zd>(poiUi)D70|hOgUadHW$;dYyB=0;zH^&=-*^aJqoyVYFxre~eCu1t^e}C`UI0|7 zW$v$1S(cwoO>+%Lub;IN8tN|94TmMwY$c?*n+(z=iE+jZ9PY2p1<&%TDy6uVTh3|~ zI2O%EMRc5@5SC$&%k|McLKD~1q_R%U0Z+s@m=a*k@PRuHuKmiTfawvANR&Sij;P`- z&OMRF7Y_F4w+uE*5GpEK>!*#TC$>&T9l~mm9Gi%fZT1TvKIkfTZ;kLCPs6`8YDyxa z4aJ)1=IYUTgNO;ZMt&m(n!M*t61yL}u9O4?iFKlgX@UJf#Mo5NU%IRQelRzK^1)Tu zc1MyNNK9@Nz{GWD8f(5A!P5T`xIIv4zjY#vHCj3pt}gik5xblX2(oSla)8=g(&h6w zy)z*3#8y%YClS(aD7c?jj4MhMLdb}1Pjh%~F4&sQIVzNum=oat^W*xQ=vbrY1%slb zuy#n4-kDF(;~@SJDiqo4c9iNwLf{QWE}MJwfi;3nb5%ZvSHuRG)$-?!cV!Gd>YG2S zt}LSCy%BcPo9*9Y0M+_`V*tQ#GwK~%m*~-Sm^xat-!!4x|AJ|(sc7W%rG@sR+w+E$ zX&Bgmi@;&g7b;m5c=DQbk_ZZX(VCk`c8BG7^Qk(b>?ptY0f#xPsj5&5QbY@-+w@BT zme!uU78VvRaCENO5x;)zJby7$F+2&!aFUGRx__0fPIitlQFTQhu}NPLUiwqz{NKfA z=Wo0T?@AA77I4U`IJnx!M1vD(A8a+=SQF>}_Hm-VAE&gexEXUWP>*)WuchunE;Hdi@Mm#<`ng>jJGcWgN)HF0& zuo4$wY)1VMm`inaN^jli>E8DN2H}@ZDg86>r3RG?7r_`rvw%!}YyqGY9_?K37hc^) zCWFG>@BfQ@OFMttaJkyZDqbewan1zJ=aJ|n3YlbO!QVtWowYS^a8dKA%MvB>S@}Bl zZJ}o$E86Av_llsqynVYLHd2VlDhFlL*=+NODU7NVc=o^ z86&MDe$abitelY%3g0{E>Ll9Wjg&#V@%PY=j(>WJD3JiofM7m4IO6I3DyJCWF_j86 zl?Gp1D?ny2UY5`Lx6_I_KQ>T~&){E6^t;dy z-TpoD<_?s~*`OsuUshFhguzD-RoX4!?Ft2iU9*{T)6%qTn*fs2@fHFue0%!Ws;|3S z{y6bEV0m>gj`#p#BJ#&z3!m4jbKKbbJkIfqHjEE<)< zbQl8O!ZT3?#9GxGSCLr}2!M<>MCdd+qKwSlXGENhSv0P0=mcUQ>_KfV8_fLjSbyO=TlG5@?D|4?f}pve!ft-c!q36Vi7SiKKXaq*`lo1! 
zMC}ll9PB=QlVNa#iA|Y=jug5FiVLTN5y=A_sAz-dar(vWY^6VX8Z~ynRY;8dzXN&u z;dL(OWBltAK)b0z4E)HgXqd+c!S8ht%uyKZ6YS06^#bZ)+uGV1T~CzLfg1GC@bK%8 zX}v<^pXpC$Pg?Iya^7K;Ghk8TbAWGf8TiQoo+AFpejdgQz%S-n#=UW?%?9&D94s4) z>>xZJi^gJ{dF=W0bFb^i4GC&C{^J7wFS=I^QV5zQ_zu!3%w+hN`;+$|8Q36k5ZZ|R zEWpL0p`=v*6-n>@ezc!i_#XttTO0TJMau;J)#mYW;D}i3Yc$JX_r-E|Fyw03FT4}L zT6)8o_QS4x0<|oq-6*d(&IX!&{{;{&pX`>^yX3Hcu3@;LJzlPjEegR*{GLW`vcxOF zux_^8O&V(BiD*-=5)Sh3=KEa2;+)xMp*SSI{BI+O#L|xEaj`1$AKUZ3*T3TEkKWf1;`Ft>cX;iol>OViF+9mT;&Q9nJet?cjLt8~z zLq7aCe;}JZ1mUg~TeJ{zIfnq%@>7>ao1HD3*N>fkc&mT3=X+=%lZy|)H-k*Zj_AI8}xIa_(uoVp0MglP~^ooH%nTpYl z8O3qF9Vueqi{|zHng7+Eg&qU#A=rpx8mSZbsJP}fF(JYpXRp9dv=qb8xdBq%Dp*lH zd-VVqE-}Do6VUSRA5MY3R(dUCGZ zNAdE0P|`Vi>S<|dx_jd@Z|0R)76C5m zAa4grTEVw1z^x7>K+u2Tu%#bPb_rE2Iw%8`WoR%UT&LQY6S<9!7dyR+xXsOLzwEGO zeBQ&InVYej)a)T{Ur+R!Wv9;Rk!!%UbRHxoSGB#3TtP7vOcn5UaeSu>iHZ2lv}K1> zLrF_04Q<3`rRsjxt`J-uYX{&OJgrN-p|n&hQ(UyH}-zr}ZSBHRe&`NUN8A=I}a2HT0r*sB0-nxj1B2 z3_s}NvOA;yr$v#LEeyJiAuIPK>qg{d%jzx`Ms`m+J=s@dSiJKR%wMo&ca&;{tE(4` zm`%6V?UfxnqlEkI9~{+*adVQV#tE5;VWf3+{>f!qk@jdDSP5_M3q`2I$H$eO87Me0 zxcZ2ul*=59UNuh|uN(g6p4yRl{rKOZh;Mhx?DVkDbM+Qo#Z7(nMI*ZlZboe;>g>l% zHlL3bAo}2kqY@Cv0n-dL+@jGy7~R){6{~dvi#RUu_UQqq;8(c7wbn1a1ks|A2??l^ zVZjUEuuvq1BK5x6*eXS*&pW{jyt@*}{msl}{HL!RHTV_T91;@pzlwLlB^(?ayK2i< zm)@7(mzyJ@4b{!bfO<4>m6n+`aU%WqU-PTY^}yzM|Fdpn9WV!nx`%CO z8>edQI=;+8uL#G4MLXl25RHly7iE0)H7+t)Jq4LtFTa_Cmpi4oDQX#L%ciX;gL*y`J?^#96Y{})-O&KQz_-#3DkEZhn8 zzY^?~g&=hkZ@}73>PkTZ?P`<#!2>*>c}Sne_TxpTV#0M<$7l*lN-Zky#Ib0u2GpXN zx|t*o{y6@(W09Bsix#efLjL9N^BmIv8?V)LB9qHyxyjN+gKO1*G9ug@(}lXB!ROK5 z-J`F_V{Rgv6T)cZ;o`hp3>BIFuP@_$idr=^QLsI1tzh3phs%gOX4!Js{CxcX^zT%@ zKrpcPvg-EiDYOsS;OX<|p*QX37N#Dok*P6T^=Nd5=6(uFa;LV0`y|jy0TSW-BmdulG1F5$n*u)0Ut#FAjB~MRss#SEow*CXa zI{Gp}yR>mr&mZu146EJ!MsU9tI|ob-R)AoaRCYFIRA{K3*59FwWyQI3nFgXS@v4NnRgXT^$gHJfauZWYj zoiMysm=gVIO{Tp3p)uD@Yad!ROw z5}Jit{(jAiy?WttfREBGARr(rA|@9};O)14stfptyOUIKyZ1%@q3oK#7st#bcKvT@ z84koxals?u~Idua%g{D}VxCz3zOl-=T1{mAbn6#sfh96+DNMg?fgB?Q0oU)>hpyej} zoEvw(t*c|V#U3RX9)D4M3MvW}IX46XKKGMlP&?4|7}|&cHqA?JvA45q!_-*u@Nph_V~)oQ|ueub5eYE-e7Lg)m?b-UTBOw z%tUzql_lEPR4o!WI@Zj^Y?}FI-BBl5YHIHze0_W zu<|Cn`M!TYYpNLH*1?+c@BTnX#~-xY1^@;{ABg9vj=QEK;Qu|Q%s77S(SslMoj2mZ zeu3b8qK%SxTamIzs7erpsloS|6_{SR{dVA5w0x4GR#=63=rDF;0pb?7@ZY{)=SN2} zfT|A6&i#;jO3QA`wUV)T5ZWh+55h|{kHI_G{a^);Ibtj`phoz~LpFp3XL3rpaXp&$ z{(LGfuC!v=tbR;<(_%7}g_%&~{1#!@A_Tle;O5#6l1WPaZ775IGc+y{{#wD0KLpJ} zzC5%Sh{kf}fxT;Wo$L85)xKW{CO3^HGU|3p*7V$jp%xFh0;Ja2bAY#ItcmUAmA=N! 
z3<7zs=?rnVB zx&j_cTpAmtqu@Ac8sq`bwttum>PilESI`#^KB%EG6G#MM#5!HavOd<}1iGP9;kZ;9 zi_C3kh_kEBj}=75#;8`sYS^Q_iC;TzaPA#T8I9+QE*4X>lX^TWs3WA4bE`TSSZYj( z45MF?xnN)>(_?VNX-3O36$4`-U<_)vl5JM}348@YC{DFwfsN7HzFCai(Ye}BZYMqM z-Sy1j7xC)H@V(uuD+wnSCv-poUi52^Wz=u~%QD3&&uH-noN_m2BO(GuKPfjFru}01 zkeKRS4#+*7A$mLap9b~S8V^CAelDzs(TP@7>p}7{%G1#WO~Pfb|Jq$KutBC<)2pcz;>?lyY(+qh_!xm(}8vs zz%l*=WyIBR=-4(cN@q8^ovBL$yIGX@9o^-{1!2gSZ>WNXzk-J_Sj4TYPSCfnrT`|j z(HZ#}((CM&z)uq?6qS*Sc+iugkCR!Qd2D(kSj)%P58Iiz*huqeg7}@xvx{dVzb^lC z^mcKlvacud63~dV3A>(F3G7b z=@k?xudX(svyhT%o8YrKuQ^i{wAHkb2SJEkbI|S}sZF24kRnzc{f6j05Ne;T_cqdQ ztqTI@)Eb=sF%;{E5LP(;{cSv!O#@Z*H)>{!76nFs`|1VRSLr>|0( zwlU=f-5hba_nx_WRW+8b{gj=-_&^Kcf48RMK3bKpc=Idf3qw_~s65FPjw9&jd zpF0^bxpvwMI@E!PTR;1-qKDOTtz)-j!AMqv4s-4MvOEzHQMK9qeN8|xDysN2SsZUU zr5tx6md{Us1l`i9oQ=-c|6neipyb!}@tbR`8)(I)KGBe;ET4qacRKK+}r z96N{&1Fo{WbSH?4Si^7u)f#Y7!#4NJW_}N>`a@ZD*b;WL`LT@ixdpCdT0gueq)0M6 zxljGtk#YpBvy6WAVFStc`347?%Xv@2q)(rM#o`t4@@|f7?c;yUM3#RN_6$jfGK$P{ z8NjD)_YD=H2BLa-X7}yLFi4ixmb<1^tE=3MdEc#OO-o{5(V{b%iHS>}fh=m~1P~2( zupGX`mC@ErP{c!+=usLu-3jjz2SNFr08^3^0A&(q^?OuCf#c=;4KCKV~CI#iDdJFR*7Fkoxb9&CPgtOV;Jsjgl;u^Z&m zWz@hHjNJoZ4?tI+A6H3fX<`75B|zD;bx{G`o*cB>h%rM>lb{pmxm`1>Rb>J;Xm1AEehBae2I zw^43kYZBwW!M1+6)TorMi(G)50b}t+?bemwyKTqb^GrD1Nshy3Vz%cl?7wW@G^TB2 z3eGUcttg%a#Jvx;b2#@n9Slg;Sl~n^!DaJ?=X<`|+Wn~Xm0ser+I zDGTtUPCwrrc?IFKklNRBl#B>E<{B~>J8ycQF;uI5W?lLE)N+hnz`(&3nT;jS0h~#l z%c#Ir=h{i3c`m!m%&(e-9R!8K&?wzSXSqV;PTmngm*PA;H`hyre_D_-=iD8Q$B< z9TQ98@ef=`kJaDy_y0%&?9za1{7~C#GClIFFoU8MP0YHM*xvTz%}YK&X)!ROqN1WC zD$3dWvh7nXcvTjdrim<8V7o@JM+zS;7aVH4hla&x*rs?z6!`A-xP70fJp#T+{&f6S zp{r4~r_^;lNfD2wI*^V`ANts}ZeKdhj}WqldY+z{EP?3K2$3za%RoVxi&ZAD+vBV< zcAlL;U5^AnF>?TG;J@YX!r~&*!yyPJBD%O(oDkR>QCPFKX9NX-E$mHf^k^MpILe4{ z#^JDu%RNlZ5`A1l19;NExDsU(FWS5mPJa9r;B{<&y8-0|@iS~dBKCT(7Akiwup`eR zRUcR-03t56kMxgEB6V&}nM?%tDjZ-(x@$Mm7WNY0a=y|~nyEh+I{^?t zLWw2vr>jA>7%pUU$}2TA65?aD2cX2ROL`tNpKntJ znr}W{O+%wrHBS5_6AY#3@$o3Oe|TID8z)$MS#sY|3a?Ci<-%~I{;G>@aP|b^FLkZw z*y32*H}SAo6KdAz`ee4j;m22UB`PJFfdZtiw|oq;f^z zM|AG$p4`kNz{5`gyr+hlvG+G_)PLRphPM76XbsCp9hs(YCW_pruD)aeKPn z-^UFPrcjBHX1dW`TVk!e=IZ3n>hjX!M}eoGHdz}^M+E5TsGNUwjy`(3v{iHlnpOxu zR+o+gJ|6>42I~^(s|~+4wWL10+%4q8!kt6j9W^X_`>d?Fo(O+U>l(L32^k}@L*D^9 z&zp6uRvXLK`4R11V`eS|c?lEh!GeP<0TevJMvJ`T|CPG%asAIk`;GPx~)fT2#6OrpMXbe(4Db30c~MUp2Le>rsfBY?c^lnV1SfqcN3i zWzV;58_>-8^c{Y`!?ol#2#(9^4-=lS6r3>Tl|4E zjN&X1!oyeF@aFTJP{v#jP>DQH;4jjjd^8k!7hY6Fb!G7k0zi&L7Yf$x zQ#@{RdbcdoQEA)Wvvdf{0$fKXOg4=>)F-W4yD2KFCb@P9U7f2-R??x*79HYlnELM*Bx?tbo=rt|r-vNF8&NT9HchWvmB$~R_f2sOC1Srf=0Kji%Y<$P^ zgZDpvRI^3TE9O_c2f`bzZ-b?AIPm|60<+y}W&I~li z+PN+0{!y=bz*;xK7b#@ar~V!F#2G?xT;pU;ysMaPyf=(+PSATJL<4Ke!(G^8f?IVZ zl9@fgZ5_)fsXnuZiM=~Y+)f%UG0o7jE=*y~;`lUeR3#b#?X>Y@?aZg`bLUbt9Kd}e zvxPr*w3nxUfJOxB9dlSrpaP)R%QRj4v?uSsB3x{7Z}{GWfwYJ2zmPxzBn`XCs}r?ps(3oEs7Y+V0I$SX}E^(^M;baU+-l8 z@g*D<(cQEB53eXG1d*{}*Syc&9pg2NbFwo#BmWr8B+PCquoKDyQbo( zNNz=^^J3o^`S@fO*1_jGrhASt6(gY0S3+y~J+G0m4qOlS0xyOpmX_jJ#p8+-W>INd zH940v7Cd+x!AAaKhm}qOu2R5B%GkF1&28^L+~{NW#=0MPOCY8b)*X}8*IUXeQJEC2 zgx*5f#M6Jco9-Uq)n4bXpvLdgl9K!ULr+!o)!r2cyx*7FA{Kr7~3Ftl|17Xt$a_t2eI%fQpg03(rg1mjv>Uf#0G zN?D(2FN8=w@Mb%U75;fP}>1h#_alCM+bF++k6AO1IQ)JIkzziyAIdmwt^tkA#@; z&8eKP{%K1%t+`8;3xoJUA0%I7)#<1Xr3%~-k0QglA5n6103g;GYI>#{SeN+p2s(kz zmj^j*Z`=`;0yYr^`c8b0)$+C+lGq)_q1RX<0eQfr+xK1zaG-C>H7y~+p=LUt?^<8~ z0zspB{|Sp0I^%Um5na8GjIN=1ZEQ_wSd8m+U7GfMv%OU|7&lVDJ^U^&B?SvdW=hCy z(rgjjw9~>bu-gw2jZZItjL^Sr!s~#Hl>+EvjKJ|M#mLSudl}bt7 z8d|c-dEDjCAv|^ILHtvP@0$r^Nk5}Z^vsS`7zMyEj=Z&1*5 zc}f#j2%|sv{fD!8I}ln$qv{-Y53Hy(4CjRT9y*GHV>~J8;*!DDsagdVzyA|gDD@ih zh@6)<tEA-h6z(l6z@%i>i 
z$Y;2|0{qutYdgjRkc8FjFyt^_;)K4XrT#FX{V&RZU(w|9%GgYM!)0VCq5-6o&a(PW zxm*@y0sokJ+&`06awwMYawwbvHcjj-F2ClrmQzy1M@O}R^zd3Zn61%7Diu*rh)3`+ zON$-*8S6g4F+HG4HetynLIX0-5;P#TOpME+Qp3f9^dL~WBF~smtm|~B_J4J3D#j^>kx6N|7`pWzDf=&M;f_OBp zodNY6ktxEii0rlt`y2OqJz9Up#>^;HaVpW971OqbqS7i}h{X1r?Acr{)sxvkFvi+D)jxyN+gX8QoohXbXA%5h@ub>o!V zL2T_8G12g`9^Lbab)xcVjuw0_Z>(oSnd-%=HBS(Ro0z0!t!q|pN*G_Z7})SV+W~_- zr0)GKHn|m+o-0cCn=_^Vi_xDLtJp}OSlRd{VF+lmf%gWT)98!!t#s#UAt2Ghsv9e# z-Xa~y{HcMK8%I)?K|BA4Oh_1eR^1`r-_UtQe*TNZ%T`}DeCQj8d^^fUhZ`E}a%;=w z4QSPj1QM2-!fxDWa7XQ|Kr%47+vzlx1u82eBf7)B0}|4wg$}L)|IhqS*jS}RQbI@! z#`zZ>AUaOP`#v2U0goW+d5i)53VZ=7_O<3#U4%G|In`VPd{3g?UE?Ao8vtzaa5W|3?40l3rO{=CZiSx!wJF=# z{C$q`e16PWY&$f{QZhDW*zmQeN~a~)X31W;%F1LUk;5*#rlh%qENZUcO|Z4O#fh#IP0c zM8@;=bw6wtLjnD7 zDP_xS--gBbrlKMs^l>YI2uN(8GNyS(?i0=i-pG(Tfjzd>Z|Insk^}mm@$tVhmL*lE zZ#iTr#f11BqCg|3{)T!i#LTS=&*c6V8vF?huR;wm7c>FZY*gHq4SPg8(`8mo_&rMB zS2>$F+vaw~h9uOI`j7{1W+ekMnnFRaN}z3RiN} zA(kO9M2FPJ;+ti98XB8hpNvzB)%h(ySoK;&LSD@*kx0`0X9VZ(Zmj{aUka-5=nx0U z1_o8K{`_GGLmUtbr+KltY%h_p2Wa-Go+pC^VYR-n3GA3`=Tm&SlD+D>&ZC1+0VR*a z7BcZeFpsE3)|)dtfBm!mvtAMgoU=%YqyYs`stwNgsob9CJ!;u0C3&45z&Z17G3K&q zQi^k6SZ_enRYYz{FKfh3!~U4MCO)NMqQ{ZIiRz+&P&nr)Ii+s398-|pL!-6ZSLfaF z!#g!k`;s7u#mKCkxr|KjicRnwv}P6uR`E(YU&{a;6Wr1v|Lg0X>!Ro1G!84afWs|K zls!4t6Q}a>9|=`>96~i^?s@h9LZhThe6Dd8JkYrgf(7oHxV=#Q86{m$yZ{SrA8=tC z;+y>SXFq~m3M+M~t+G;g{V_$~yG(*(l2=~uQcO~k*!}!(vN{{3WPf?wtYn4>ZLb_D zCjArUSjs1{0NeS*&B!b0W+Z1DXw2ZYM$#H!cJd;G=y?Q47U-z!p5rgZ@NADU&%fx@ zD;+%$XMXk(eD!+;#WTmmZl3IYJ-WHn>nO@fsLox)<eQu}+C%uPja=d;d#pU?QC z1Vv1q{^Mkc-IdHRRW-C!3Q(jL1(SiHNb{3>`g9Owk#39MeFA0yU%?zr1$B>tJTcu1&38MvLz+-&3sHd*O?)x`=0ENO?SijC5QO{d@JszYg; zlomKK7#e@)u7nhNMSGMp2!j%yIP$Bj`t6n2V(- zHyI=ENi_wf>TaKDr@+A^BPq&DLQY3yB_$=)^6#!*zJ9LZ!Q39U`2hWK#{BQ==80n` zDEFpsB49GJ&GYSsDpc}j*)yxlD1nFnIDYRHxapQ{-nAk_dsiJV6qLkr*}gjYB6#t^5aAp)oMy`U$&gFU{&8~l}cc+BG;Yp2oBB7h6VMk*;MfG z11jBc_Rj?`?Vr{>O@a&`;Y2+{;)!iM$E_FkY0w>@M93GcZ&hibu#wTDA|ezapr4Hb zCJ7%sRu-046jgNte{LJzbM2g7eKp*X`{V7Pr9=`XL#Jp1E2!xBC4Ye6@`q^pr6AQ8 zR7grc`E=Ld5j+WnNO@+yl1xusUYvZg=NX2t8|Qb@!f${^{WXsFmiYsqnC}kbIjBN~ z3IOdw13H*3*{Cjdzz2ieG#z%{%F4=GB{dwde!{He&2klyal4(8AG~}%jcna)2O~-I z;HhX_NqPdw4Ke!XSRVV7n^POR-s@{clIlw5*V{21;c+rjP?cuOn%#0rhr;dtE~<%w ztdc%Bb4-)X5o2R_ds7WkkyY!FppXy_CWC$mP@62ntNHcA`1G}Vs>%n)baSa;inri! 
zLtv)Bf&(^k)yQ)$l}ubg)liYR`{TtXe$fyb;qbOzfclQ!#ra`$b!a~Ow%%D(Tuhoe zq1~T;)mUIcN|{v-2RHRRbZH_cmPWrUEV;kR(MjNz3L; zP%MeVlwgR|6=^9Gv#dg}VL|_YF96j8L(%W#7srN)T@+Y*R7Se;f7aVDH!eOA^Wv&H zArm?+MhCYD^)n2g_Iae*7!kiBNA43ZMa7r$>QTbN7N}7V`;q7ZdIPe3@#&J8h64L( z-Jr>4+SPbEug}r2J$8_NZ&z$Q0gbbkjm{vRQ|tiDc&6p^i>_l|+tY^{&?nTgFwS%I``w0m|%jx?++YJsJ7fMnygMI@-*|mb(NLMqEzAV zIP^Vi_$$k72Niqf`nryexkssQH@{>-U@fj@&XiD=K&|t> z$E8n|HtXor)g!&-t`aOH@6Q%Ocy|VDs_ZbYyFE^OLgsGt*^C%*ZIP zy`;^u;P5$KxNPJlbC0`w_=;!oo|#T{fsCD9jrZdFyrf_pd5Uwpkgfb7>H86|Sulx)$4s8~eKi#ccfWA@@FI)PXdqb8;z7pWKxFE9!GuCO zjo$tu0FOV7z#-QTH3S`1dm4T(DSZ+2$e$fE#)KR5g~KN_Z2Yn)DvClxT<)iQ^ZsQ4^hOS`Ok2N0za62?A zd{9^RJzlN%EFe`?wkAR|@`uJ@$!IF7t5G*-e*7#hNsZiiy=oLhsqd7}VznN$8$&-vBwYLzl@cutjr!n?-F=HY zHa7_0%1B&+ZZuOW376Cw760qsZ`#Rq6 z^eXTX>t1$&Xk$ki4|gx{ZATgXse~v1)o4B4nhat0c8qD{XWzS7ti+Hgi6o9sW(}+p zY}9g-c+ZC)^UbV`C`Vj9F90$U3q=3AAhpm)M*PJeF#{Rt20x41iEG1h_chD93a}aD zvU0nl+TWr6aVyJLKonoK9abbyhuP%I{ zXw&4j;kJLDFqrNhLDqHKxQBs5%&S4)nTc%md<$(oDW+5MbcECQd@z#D{(+@t-Kbi@ z6sw-`(bdxh4({V7q~*nhi20`v#aM#fT2V^{`tzscy7R@K2>5!Lgu_Jw9;Gr&l&0?7 ze_GMAv&Lq6=MqLjG>o_$gs^piyI>v*`1<-A>xTB{V<}8nb?W^~86_Op<@+qoed`?_ zar?=&iNRWV=V$m_s3jX6kAO1GkZrv%e7wv8j-~7Pb<^v}j>HC_1}a~;pH;QcMEhc7 zdmUOS;Z2R7ljw^#NT+iNuXD=A1=QkRol88O8GPb?0*DW>-6MdIQb6hoM==spW8(R6 zHzgopYz+4DctutC+tErHwtw&YzypvW5l&{Urf!!>9m3%bRI@)hAvnlpZ>gp2yAjla zY0W*l;UVElBo~5r%L2T3;KcE-cU_NB&|I~V-=VzCHQrmb+xh7A>d^^Klayc=dK{PwI_BFqsKA+9ip-(t85A}J|DzefT%AM zQ=fpMWr!ZVmUyU`Gk4W+jdH|9v1hY(L2a*M#UQ_M zyy!5Ah~8f)*P&JAreL$|80q%#y7i|ji`exz7g8dgO zQEz~=eZuX@Oazqis$~Rpy0HehZU9}?VTo%mcmvqYEBU>u<-@RGm>Be?7Qiw_uLv67zUG1%FZ-I zPp{zE{MM=a*N8aW5Gk^$!~$r_oqc0hti$LP>RMmo18S%5wC2B?td$$+xgaohfLJT_ zI@J9h;E~eqCU+8?O7kQ%;l+n!TPN8UEgOytnXXusBT@@=@)jLE*ozdVZ6nsF!iHju zRChRmQcBY3Lg^Ao7J7S-*`dOBErg43fYyI$>7ccF}z~Y z*eJ#8FkJ!U8{<{40Ddw)DTzE3{rpps37?GtyWCWangb%iIm}ls66Nsz`-5(_l|r#r zV}HQFz`*``D$`(HNx>jApEG&p0{a51hU<+jAyaC}pVTDkSvK?h7J0)ugo%oly+v5 zJ&E=^<}ey0)-3S3%hPGO4D;>^V~!jqZL+K<9x$H^w__cg+$(7Lg$hBgI+o0UCak8b z8-?>P|Ip3MlAW7d8?)7tT|Dil{M34`Q1I{U>{zKp`@zjFuKy~X6;)NK!5Z(y2+guD zF~|c!A1velzNv?qq-l)swgNbu2&}k;I(Ur}+LN7MfXT8=tlh>}o!y=On*ZS7F!JEi z)?Cycu=nq#nUQ*^FeLgSvWuTo5%Snqx6!iUo{xyll+mJWQg;KVw?;F3TfoVc)P(`7{Lw_MO2euFKj*Y}izMV~Y!h?5gaI2B z!SwvWW4qx3quR9A`n!( z6)k1h^>smnw$xP1=QeqfII2;K3gMGLR=nCX_^=Y1x7YW_F=p3eWLM=8JNNP50TU!| zE_^)6VIi@Pm;H5EQD1p{9_VizSp+nB4Y=hU;0pVbLvcZ_PIi0j1xBz}j1>G1mIy={Iof)rem z1#b4!v-zJb+;;hp8r7Ag-fVC0l`yc*y;H_vF2aTg^1a7*Q(`IdWluL9W`C!Y_Is^{ zh7ndG9I~ReOJP7KYXp9*Jb8uz)qgWoc^Ih6O2v&c(8)sNZN>&XKp^5wm}9Q|Pzfd! zms(eFDr_?eYK2}=V+V&!ocudT_#4X-%^dOp-q`y3Z=ka!g?*va7JyLl*t(w}Q%GB` zj6)18mh1jT)GFUE^>mBezH9d0j8f$ot~C=V53<;=d)6Uu`9125rjZ>-7PMRdrzZyN zf8!2^Fg}eItI2SRq@sFJ8IwdAia!!w+|vB0rB! z8R~UPKf#TR(hzk&WNvO;?DlLfmts{-;7@Q+uAyAYH~k(7MlVENRR z7UYgwwPx8yGTFrQsngG0!__kWt;~I$JrTkwO!{i_K^CXc8RPasz{KawZr#PsBk&BQ z%-h2QF>?vNx@Vvxan4WNH@wLB1$iD-K2Sn+sp&1#;wy?;FI{#;71Gd6;6_bu%pw1a zDA#^!aafMJHe>|D|E>;EPV{!+U&_cJVQPzy-xnAWjVN?hD7mAHNeH!9BQHR1+9>gA zZ&j*n>iGD0rkJh~ydYpF(xTs`cvPFOXYLWxZVCTTsS^>(R${oKU=#0Fi*C1Cl)DJF zSFRJYVr_-kvW;z+z$j8tj9e6(ElxvKpV+oL zwmY_M+fK)3cWm3XZQHhObkZ^Ic|Y90aH{I;U2Dxb1_^NRcX8i}9E6iHAMM(;?L&%+ zhmi7kw;qiHSIO`fs$jQtjaJWnJS?HpQ*diostutu^3FZ%_Z4pkqNh{bzGytWvN*|P<=dedYH4C&% z({t#@rWTCz;q1(JrnzpGAN*(Lr>EP^#yHR4qqsnS`>RTWMdoiE6DtX2Zp?SZPI0E&RY5 zky@kSI2>UVLG4A7e^D{M_kj?F9GP<5G@;^*hGOL5AXat zL0hc-H(5hJB}I>#y1eaS#vl@vSEtSV48TS~8;`AKmHZf_XyK13-ok>1mtJ+~qhVu5AZ@4rl!H=AxAClxdwIyjmH! 
zMo5PNv@+|(mS_*A;yED|2Lo|tXXN@l1W%sG&~fBKqX1?k+nMm!MUET#I-fc^HWI$ z!fvm}qv0V@qtlsfWe7Ypv`ELp5oykqs=edS!Z@bdMYB}B2w*103&1ZE0!4#7s877% zz}^!w@6x2q;fj-U{6ytZSC_kuk13t4j<}+`9gV9th}ELE@-!W7q}5byu&qnSIHC#5 zN2*pFT$`7k1G3p&GbG!KirqOIr?0!mw$k864Ifgbm0$7-p%R&1$%l{E31cARptz65 zJpTmCE~Wesz?R zJUCAMj!Z~k85*V-ylEntLXETz+B=}F8RYBj#9+?%^Z1NTz1xrC;#4%-`aQ7i`Qo!- zd&S7mkI=dsabt~ET}2_~$lj(5LR=l6GEeaN_D@vOg=)jpMYL7Fe2lR=AuDjN4_Z`g z*jRD*CJege2Lm@~Hn*!NQ`e{c%~5dUe{_kn)7zApJvDh)rU3p{+v^C|1PJ0Tpe2t^ z%~3w^{x2IJP8q$!EFD&tkqjP!YwvQ+$`Kz)_%7Q}cLKyyR88-^d7suy4wg;Il4q~i z;^~9orz}i8QjF9FHWQ__@F{Q-IWf`E%ieJrbJ}zLw0%YUjSU7L$LgKbIF#j8lQdxG zo6arIe+Hz74lxVfGVnPy5bEqL2HlYM-;LqCIZml9AfnT$KWCfl`I?!^b-5Hswx4%l%u1`Zs&g>@I<{Fz5D*H9)*c0;(r_K zjq9}GVXx6M0hiBa)9l!@(e<47Xo7Lit7yXp0jYP;UXH3eF04BEO~!Y-)`FxV`@yzR zT0}B{8m65}q6K0EZH*Q)S<{K1?fpCy%Gx`m^6iCe-~9z|J7IjW-2w&qg!lXXYgbW} za+pMQ8yhO|Y{=l^RCEg3^Hjo$$zO4A>g4N z1O0OBrqB1qCuzt3%>L0-@;6`d@2~J}#y5*_?=sy^S}5ws3)X>&KqB=|X|V>Eg$9Sd z-FV;*9v*Wv9g&a>)>}PL94a*1-L}+@>lWVIJwGIV*gWYZ>mTx4&KJ|ilaiLoe4k|6 ziCH=3t8QymF{mzUE9sggZsq=v!I)Uu9T-R??qTp|mYzf$r%+Y-C86sAMK&x}hEy5S zJcGp02_f&Q1hA@&EOqG?-drWrXY9lVXwVaDglq+Ao9vPU@+|SH5V7zvPl&m;naK*z zgcT=g@*J{wY@v-4(2(aTV2y5$S`&;Zg$Jc(s={vLLS$;&ap$w&;C@x!YFs&K^t1b; zuHqXe5VT0K@M`)IDrp%E9<0c+KqsNJ=r&(a%4T_ldmOICMAA1?T?)tIRnB6Qob>wa zz~jxTio;Yotz#v7sgn65QMTWYWDZ@9#9g#G9+JhzaJwFk#sYoW*EC+w2-^UTZI?oH z1<$ecS)el%cP4FtjD?D}GaSC&_Zbv|8_`xYAP`bpX)A7z4Z!FIKZ zvllbc(qf^G&b8~Ynt{Wz*l#ze`K8bp5mkk6pmGIBJ%T|Zh!z7smOqdxG_06;<)f)~s;7i4!pLNM)s%r_ngx?pK&jQzN2KGJ zq59p~g2+>6D)ugw#X4LDr(u-dI3C@>iPuieobbF@uMwyh9%I>CO2R953>9F1Aw=lm z3fBsGvdp=BqR%x7+N_~REgx%6AMR;dc(`0|AfP_^TsNA*~!0O$=VKFUhp3paON6-0X)9>#dDLUFHzVMW`35gl3US!1f7p zWnop9=-+T_JxH*|v}LBIweg!}|My5IB{i-6a=|jEa4>}~yMgtX1Bz<3+sbMhrQ7N_ z4hJ}xqMAa{Tiw5bVth6cF5U*VTx5KV-&#;LYGl%6v~CDG1@nQn>iG-O=3#gC{_mNc zCPL9ip-i@N^t@ivX=?&rk5uu@KRk&wo84cJQ)N)Vb~%g&$p5xK@-K+=$FX3EgvhLe z7>~x#0T*P+2+9}z8{gxJzr~dD6KB=z|GbOQx#}9hyS**B8wS_g4?v1FKis-ex!x!Xct(-WN|N;N+<1e-@bKN-umMM-XK~wsL`tMYN=4J|@ zxzsY_1SA>o@SLJ8*Hme+E|W;1v=FJDKw}PO1nQke6R#wt)<9=q?gIrqTGB>r8ANmiO zFFMbiR(slCJ@ABxhL_dAdZ5z4s+k2p{e5J`w4ik@NFEOcl_=*a_M)ApLaof7RzBEh zU9{r&kpVw{` z!UYl?@Q0BqZSm?nD1>8B?be1hcHnsVHRwo=*NK5|^-2XH8o?dO#U;Y$d7A||s-C3; zmhHAXq{HHNm{oeCO_SPf9vK!oN0{hz3iN=e!TCzx7GBGm@$v;g2n+%cV*W1|m0YOO z?0-k-7MvpW|77ztl6QICcKfF=(I1Pw@O|{u)gia@+Y*V!67WdW%>`iOSAnJmOE-E~ zZQkQUJ`V`i1`Jp1@7ch&EdfBNctMJ+!;LP*!z?@xhnGQE-47=!S44@9SY| z*1wN4?;cQGnC}{7gGc}3w^b;AuHuCswHoNal6~0s+<0TfQa&AL?2$_h9RXQALC{Os zjfLun=fX7Z`h3!fD1wOi%^bc87>mhDe;tZPxZ@HdqjEQXQ}el2QFVX4T^8AZ4f{pk zz$3!sD>nqetjMbDy{{kt$#8=22KGndC?hw@!_o;)3o?xRbHpMZn`aN)3V(0B{#{Um z$BCg%wVTu!!D;8Q`t0v>(^UOSZU&upnVb1ZLmPERx+um;xsG{` z+V4YxI!2lE(t5P7WN{P8XfAoHre3LR+FUY1&E-t?=W4LRB*q9%o_VX{xC+%2MaoUR zg)&rAreBZoEF(}-ft~Hgx!7BA;9p8Ilcd)3jY2p`MBFe8S`EYTfnwpgnBav37CM7^ z{9K=p@S=hje034t&Ka#!DIv~9PH)s^*`LI?9%)!L_of0X&Gr$V*#Ylnge5sLv*SV> zxA^2-X}iP1xFJ-optWeuvDgg7b@Y*lmem`!?7Duh#lj8zFMA^K&{m5j)wb&J;mYqx zr-O>L0)hK@UR!D3;-2v@FPg*_@TCFPp4#W&Bqn8FPMzg*jM_{~JP! 
zJ*=$l83052AOQHa@%Wuw&L@-1{Eu_rFy{b@=?=dUkci4-HpVSk!?@eQ4+<`@E*Q^y zEhO4%UXOq%pM;3+z&Bq&S7xlD8aopReF+8HZ1?DS(`hwmI(2{i@|Z_W${vk1uI$wBW2 z9vCf2?`HmKqNDgMb(9a2K0D|256lIifsMz-kJVy@lXeY%l`oIP(MSPox@?b~nvM$^ zm#efGppYX|AqfTscEqds(MO|RLqvB1d<&3@e*fW9mpY(Inq1K*;4z05((U!*17z=W z%{EwB{$HqCzrk4(*LC_p6_D<-+?GMmI?=KuWsxId1<0T%s-c=)Ks z$nFSzH2x)ZFBRVv5% zx=s&E=XX3jvwO?}+2aL31Y2By2}~d*le)i$(SDj|>*wW<2#P$jbEcxQ?57s_&B7!t zBW;vlLqA$ZIN1#qa@UClECW-&O73AuXZBJ%zu9OhwtE*lCtOK+Nw!WwJ9$&?%9tLB z))ZE4L{GV`c+E4Am@xeo1e&G9nFHsFk2I^4;~SVsBTiP(Is9@7@Zb|#w4iYxHy|XCFbWR1cdcN1Hm$P zKsrmBGv&>Z%wozro!d2oZt3c=?^G65oOoa0V?P!(=kq90q0VyN7^YU=>%3T)>@^*J z7$`z&Ib8uyjGW7@i3Nu{54$c#pUZmZ{>1~){A}TUgWWI7u?-^zyC+5=EVYa5dcQ~h z&xK}suPCZvykFlN`2@53VpWMb_Ji!1TQ-#@CyU3JA<8`i4ITUAY{r}{&oL6CgPm%M zh7?tD2<~d&SlQU3$I9sM=TAr69Ilu~5U{txOcoXm8j?{-5l<~Hr>EmR1^+KJ<;_y; zPj$RgY{FAEP69(jBAp@T4I?U(tk()Fgw$iqpBzk&E1m8{FnSpzP2BxecT;0$RQuq1 zMhWm2a?=@XG8@JKW_~L?ATTiKjCtt}(3MEqt9mVLv?_=R34!SQf4Ux>+zbK@1;YG% zV&!z{Sp4uGBD`cCv^CJaOWI~dwft*`__^_N(z>z-41|^#mv%ncrf)#h>7GC zVhm8B>jpW%n8%2l!%+j;_#Nqgi26d0Mz_Mr`ll7t3ra7K3Vq0(5{S-X$7BZrGi} z)$}?onbhbE0r~K-@+%NtuB|EcYy7J!I5>Q73yJjt%;aDk&%?CccL1L*)5=VJhxy(WW3VqK9LMwAH5cJ~HHa zx62N`w1K-ZQ9G3X`@@+OerqpSUm$m6?k=O*2FMn9H`#5!E&0Dt|Hr2rg=o80hvMt? zEHpbMJN~TKW=RYSt02TvuY&JofyWcO))8Ay>^?mVW$M=7@_l&&ARo zPRB)!`Kob~n2HHPKWXi5qLaFTooyFQ)>shA1y0Y&Tn9yowb<3g`>NMEox~>t6SC7{ zw|(Ndv-+-V;t7Ado8CKcv4F(yVNj*gGI|ig?R+9jY~OSRi0dk#*Mkp=3zG$0&J^n# z&WjJ*>$zH}M}^`fnPex5F5wtB{}+E73-omQsDVLaNcgJ$@bPL(n;eF6ez5EPWFH5a zXO@V_H%-vM^Z{BAC7pyO^JYPfeu}2;Tzv;>uQ&t*(ZCUayXKi#WuAj86q_Bo#3sfdpWvaYhV6~-Jk6i^k^ZPD zr=XAuil1u<#_mYCG+iWO2#5_A)Xd+X2oh*V@p}M&wA9;JroD3A2ORn_7RRTjoDW0o zbqCuea6)U1ZMOEF-#aGceIUdp<~#pE8p&)CjoHr@Y*ox(o@GE%zw;FMs2UlZiX_fb zyAby*w@?ZH{gtL&%X3Av_ge);C^FZ;7rWoM=@WVR95ah)ZH&(VE4qlHkIc8FxWGpu zy>q?+Gt&nxPaJ=o63geJojEnyXc`bPNSX?_mBrgK`^TtNBvItsj#a2t)>$ozb7>?F zdFVtiGb^2KceWm}HkhpfB5=``LjWL)z|nh9NQ^}&Zs<;kiJxw>Gh4E{(?P3@oEHHgwz zVD+wD&7E@HDTSAkjty(wmT=bAA+ZQ}<(24|`v+0M0K?O6on6QM5lv7v)24O8bZpZF z7bvzV!Pn|Fjm!HI>wmCmj~Ve|2;k%Onl;TND^VpG#D zUiHke`$0rPN(yW{HHU_Wq32oIPNG1=>XoM_B9@mcS3UUym+LTVZH>JY+PNB`Xr*wm zCQ(R3!V#7AcR*y7B2BZkIdR;~G?Tf4tL)5b2owKJPCUZp)@n6ko}XJ?;D7uhJH1Hz z`*e$?SGAXCXh1Z5Ke#PkF7ZrAGDk!6p6ul|Yni!#$>VO0*K2vgYzseAxC3nIEE5-m zv^E=`hp41Vlx%2(5x`{f{lE+=qUHydVXced@+p&{^v=$7J$ z8DgjH)WBdFL`iYTCaA_C3A_l_EXU(BUday3(eW(LjhTRx(ohUEn&(K@1rs2cQ@h;F zdn_*Xu=Hi+Ze{s;yp4bN6-=Q=78uL3O;p+a z8V~=4M@FWy8`)aQ&9d?R;f}awnZ50Lm`TiaB|qIt0=Ui>d_}l~CSYmj6c_RH)L`#U z&ha@Z=48$`ZddR!EEq-`O;&9m6O+J6HVQs$f3DPFIskr;DfeaD;9_F6U6LGR6bq++ z&@(y^@VXoJbhOtmB3e4VNqg0})AXTTde2Cn2rSG`TNQTs69|P56A${X{>eBRl8~_A zr^`OB5=VBR zbYJodsiCr^WY2sA4jmOeqE3+KMAj2&*RK(M()~(I_421IwmXQI>tz6ccXIN44cG*2 zMV;TY!pl)1N1fNT!bJ_l4>CGsk2E=YL3OE2=ko6frBj;GFEHY^U)0mo*H(-wQ?El0`7>R>)}>Go{M=R0DVm`6{L0#Im;k;`N^%4M+y z>X|5*c1I?4#)(KFs83g>VnqRG!;v@lT9fl z`lqx@?UmKHYL`fS?1c8C1@BduS+RQp6Y=cpDG=c#Lv%Ylt4Vt{lwtq0tQP=toEm#NtSo|gI=#4xT zKm5MrG9aa;g*zOH6)`tw1c&4?&7#-tlB{@uLd2%3P3yE#k}7fN z3~$w~lf|ypZasurrBo8!e5p)Hd77a+@OZ;ByX;+?aKtRum7Y%gC3Cf26=(al_q*o+ z>C?>aJp#){?x^kF3G5~<=MTgAHHxZ6!1+V3h@K`1!`Cv|mgl0{Syp^Z(-T2b^O_R+ z8LPM3V1Ada&%b3lpGFr5<3s@EDuVlg4^pRPTSrZ!aajd9YVTnQZESq7Ou>aX7#b^l zxit2cX_*hV;xf2^;A(8WT`BTowc{n|9^*q`<= zTwq)_mYSlP=wJ~0Ap*gfDed4u^vp?AZ$x!6l$?~Zm#841wsJ?}a=Su+71ND;y|_W( z#`#D_PT899!Q~cM$I+v2;U(ytsC*Q#p9Ti?Bx7dZ>j$LVv|~n1&H`cD?l#4s)l367 zG<#|sc$DUt@DK@*^%qDSkz@_Uo>q}J|_70SGt4*F?})QRQX^2+jlXtF@W83(!j2yJ0B zfv#{HLoSsS7s1_;A#U;ukK2`?d9AGJ(+%~hXflXIoWPF9CjyLs`^8du|0eZe!hCrl zc3YQu6eM}8H0JF=O?D@FX`657Q$vo}u&5V4oW#Z8yK`99@aJuQAjE=lysyh4+Cj|i 
zXDNs_8rOg^EAG70+$!eL!JtCVZ(14@(vhg)F?|A|T|vEQ73RO(!1bA!ouBYmVnLnJ zAw`vLH;&ja+u5@B598$!a}AEAk=4}-x!IFiKUk$eVn~>K#|so|+i+b26y{KJ8k?H(2NP~i$p{sAJtSv+#80248Uf@>ZM5x0tc1(|c1uhohIMo1=k zh>n*8o?xR?s=0c5w`9bj>Zp||H2T`QNAIQjtUta_b6K-FLR5+#6cn`Bg0_0ykd2Z~ zxXFf`k%FB<9fy%Qjl-Xi|Lu;H*ennVDZ$k7WG_CBL(CRu#cYy6zjv@E6)6mV9JxCs z5|bgNi*I;pJH=EnyW2CTgBv!|82eVq&h?87oL4B-``01W)^P42IJ!{l5$7xRWuE3qK51 zb!mAd8zl0+BMUXKzAhEaqVh=Yk1Zq3=3fFQGa1n+r+Ptd*PDZMcFokNrxp8oZsDiX z6<^IS_W}y?ol^CaZm|&5?8@1}`)`RUSPu_2LFK;pq9N+K~7`5$39>*xajD59BG; zxQ>=54!=rJidv^IziUTqbX+&9j#HwjqQ>E&EiCn~&4c1p{Q*Ar-9ocj%|=h|!C$rA zZ@3it?D_j}xCZvwE7s2=M5vf_$>~<-R%;EOkyG=bU;pBwoGHHDB`<$mR&q${o_m%u zY-KFl)EOVc1_cF4%42uGJa#2cF;q2J$u7Zv!14}0(#F~RL|u6QzkL%rYw%`B;8$w<{{XJ3ePSVP^Qu~+v(#; z&uW2quBgPL&edu(T94Rsj7Y;7%W|KsnTX3kZPn%kQ$=rr*@5J3Px9{ZB zRYy!%gVUU7719x3*h5&Z^<1d-WohoC?60-kX!u5ly>q{HR7XtPB3rZZa%Fu?rZ5if zIYW+c(T|E z(Po=g+xPYyd=c^cw2ehG{o$mb=-4l~S;^snKLHgJ#@h4t49)BNfur&BBn^8y$UJ*f z|Cop_NiJi0n>zJJvfFgn)cm60Ixa=7glOoc`<0#3z*ae|*^&KjN8vd7AIMq(Aok(t zR2vKO2hp>xEoc}o5E5x%gX`>=C)iL;qBB8H6$)6=~9`qBHJKTNMKLpu4BOJ6x|PIMWAW=_AX;g#%SyiCc>u{ zEuAVdM#L6hMbcshJ`)%?ps2HtS_?LgkG6Mxk(?hNQcPL#;;IgK4+~EfKC@X63%G2X z3DwC*tp_G5aObPzpJ`>r%pXLow2CQX$2*l17@Tv}kg|mxR6cjVzfNDXkN0b6EIv*D zq2MBuHvd;rQnFW!WiT|&u}z0z+M~Y)w0xl#c#6l=eXf zqHIsEN;qT72F61mp~vcVm{KPsCYbp_J0jX9C6>J^PcA{@=IUlz7R^1#&f*Qm{G76E z1I(R~9AV4HS+fiL)8-se-Gw+axE@Emmc~6S)(x~a9Dr)%H!y-Nq}M<*J0o+XBjg4$ zyE)!po>91bump=d#urUi4Y!DdKgSl;Z&XWJ(sKAiex1%_Mou#JZi%F%k2qI^AKvF_ zRnwIG7-Px2or_t)NtmpPGjE2Q3?ATDL-E>34!WnE;?jfs9A8^4+KEs~)G5cgK)I=A_d$g_{bgd}io}B*9*%fuIae-gr z%3Ql&=8`^|Iv;SJg8{RhGxBupk-s>ddF zLIgg)AU_;*mzu&mn!P3M?DL8V5vu_cXSidFo+PC<%tv=?X>uLo&^Z z2`#57!W?|Q!s4HK^TVBA6@@OOWAXio9AQP}Bf2d=4X#``#I8?Z+#YiI4%=<6d#-_( z;kw=BpI(6x=aabq-M@&hj(-;Gy8&F0Vf4ar6F3 zI$H~jJAWB3N>Y-Ojdk2&;;KIijE}Ab!ftzYBEGx-aua$SSl@{gg zwu|g3ijwn^ptkCJ!@QZhi8@+g^t`xn{ax;3p)w?Q`=uSM#cFQ+(Iyj@hFn?|0NW%T zG3rSgda#T1i6&Lgd&Tr+mTR|Gd>t+W=>lwH(c>qR9cUhA+py0K zo4vh}s1V1uSlD^Y`J*vMPoqb2VhTO^V@6pItPh$cyw4C_`I86?3F#hYd{8DhX|}jk zLR!?Ck2oCVj6houxuhv!tuu<Zknr z1owKn3fHmk6uaROnLc4FY+CqVk4PhpAX9hdZT8v2U;f}t@`m7t=lIm>zo)1-*y({; zMmD&z)xVY03?q%-eyE8CUmOEqbP;r1%K=*99TTW`$)ORG4*kO#Mh8VYa~*0;S*Fyq z#uwJeXQzVMUI*0*6tvpC7#iCyl>GV`k+_ivu6BbvC24GtG=Bk8P~S#y&uMI++;-i6 z35gA0&{44>mq*~5A|gr!n2`~%KXPIyH86nf`tN_08YbUkysG73E$-lLZAPX9?=G@A^+y^JughRY@i* z#NoXVqoLQEZa-*w;0ncK7Svg%ro7F}9AgRj;u02~*6bJ1vqs6L9$ic;|Hb9=WzTnN zFaqsR)&+CBtv0C$e9byRW$|(pVtM&P3rs@(zhi^gEFRD` zAIlgcP(M3*@oVX>3P4y+FKdf4T&LRLc*S5T5hCJO+tCHaQtN1lN*qsUYjvILG;p^l zmB8rsNC#yVqGs+jHWNt)_Qy+p)*;sJipE!n64LNna(d{?$>sSMq;)ZxRm@U zF|bwIj$AH!7~^KCklNXVi8(o#X-X%9-?t;^JcBRu3&Q9+qZt+V^s}&` zGiL9l^v__)ZjxeN$47aJ5NG1v8<0U#5KxKkzw`yrq*A$ZPoclkH@yPOx#+ApJ??%0 zMGiV~_DwLaweU_sQX2_OD(^{HhlI9-f`t-+IjhIUVoA=*$8yEGvrTdqOjF)0GD%*p z-c&#u^J@UxoXz#Z=NQBeQ?!Qp0~FNMh}yMwAF)$!jrSOPB$g<4l<*bH&;RsS*i&tP zC*udy^Ued*$BPIedVXjiA&YwAN`-;-1hSw;Q=>YwEv2wO?fWOIP_jy31xglJg8&FzLYq_jxq18ngBX8|A)&m!qJR}XT( zZ$Y*^PqP=m9HuvZIY59DH_IRe1&j4ttBmf^iT#q&_oeV-4Qcx2aYk}4iriJR z5u8vJGD^nEiY5|**bn&Bc_HxV=ZFNpUJ8Tm(}_xD5KA~D*K`?3i+?}-=GK7!Je$k= zun@|IMY~C7@1LFolT!!{MkHWmDT1;WGV=YlNbPj$hVbf#SF_t`5QDeHv`ODKDPz`9 zGLLiQhSnVRUQX7)F=6>xbdE1fn@l4dgzt7cueV|GYo|#ni!ou=ErDA0vEzDjTnG@i z%~$KV)1L@6k~xvs^rPC=>_-G?rLBu2rFQiF5fTD4bfecCdZGQ>o1|O4h|UxW{Rd*S zirN>xi#9`OP&Uz14y1?%w|JE_^ugqGpvl3*nqP;IK7Z#eJOJFksOLMtP)|vrT-h_^ z6dq7AdJxeMh6rX`mXx~;`*#2KZIjgkEua7MVmC6{b{$fAS6Rt8Y7#3z?&<5-Z!WqC z(OEWcqqy#wz$DX^zKV=SKq`ILq`M|B##5GwkBtiCMkJLmEJDF?Arq*4x;#y}kF)UM z1WvPla{D_ITlB1dw8N40w<+B6Com53Qf(V*ACnJ@OMpk@M8FHP(z}$^u-7oFAou9J 
zWye2P^q(9H72a!7`WmUkrKB(v8RWR**-tz5WN_+(7Oa2p5hiJ<8t%95!otj6EiWcs z3{WAhS|4oK)C?El*^5eAmc==RaAYR67*kRSR8F-@Hjk)e9EC`6{NSb-*+V>`_-2J6 z5NSo%D3uOuwaa3BV9tb!Z(K^<)XJF3=D3Lpk`SV}71p+9;VvB_?$gTmdmbG@*i_U~ zB0cDGC=V|0&~&WHYh0Ik+hbysW$j6a@zI}3DVTJ@JY+FYJdLbLRy;63J zi9`G?&L}&m7QWvfk-##Q#f3)Xe@DEq**c1$q5DU%`OohT_5vs~(nAq7Er^e$&2^Mn zByuW&w7>zd;V{cZT(p`FNPxsZqwl-#7nOM5r+pYR8&|;kzZj0L^=!t_;O|ZbQ%-W|p$ze@Xc)>J>5AT*BI~ z9vczo(9N>88*NJP5En0w#l^Nl&OzJs`ZIA@r?9tR-Nee-mFLvVY5}12yygj9CoEDL zgmSUZsZl^dMvOr`>cTD??`z(8tS(Ino%Gi}HrWhbY5yJ-uPPY?Ix48!EmqIYVa9lk znTuI!g))~hr7fYD*Lf>Bl0Cnwif+Kif$>rVvsQT242}ahSJ7lZ+nnaz9<=fhY1~CY zgq}vf)HY!%np-7=7EkMb=Pf3w{WpJ;b#hc(-j8_3d@RIdvobezYP+-2F}?pPrhGOg zl+p2dgVZt`l}aG>#fs6!!=*Uw4@CdBb8fTV@(|_3U#s(s_Uh#cOdP$vRC0*ue^RJ& zKm$?Qjxu!KYP$yQxJ%a=?+JP&X`y?>72a@n|IlXbCE85rOza9 zP09Z;u%6Ur<@`IKNKRkI7m5?Q*<;V=hpGwgMsD9VygC`IH@H@+Ry2H9GrR}q&>ET4Vdpo9*6*>v z00O)J($w-kGQ%i4mIlK+I=DTP)YJ`k25y;dOMi`3{}W05`kZf6um}nPA-}M<7N>qvtdCuDvx4S@a883v2;u^#E@5Uj;lrK<7^w1AOHkY97o|oQm}L z!D)?qcy-I=l{JvTuTV;==3NCV#gf?q50+T~?hUIajUOzQ|CNYNuNh|h{oi2s^Cw~; zWQ!2Av&I#o7(>*F&oC5CT(@&Ct`-Ir%~t4EHy3mZc%L1Du&U`#Zuj%w6&ECQZ*B}< ze!HI@Z{?Kt=Sk&jS;+{Ur2iob;I-5k#GdB*;_9^iQXB0zc^bqrnCb(lfU#-q@HH(b1Xg1sdiWS&Nit`Q`~F|L8O|afxYYgn4y&+^UTI*U(q`Z>qG+3cKplv2MOHw=x9Jr{PB zg}n6~oCA$o2&~$O!^$bRb0X7y>^T0u0fn$gk%_&O0Vs{Z)uz2E5#B|4OwJ^kwM;`0 z;@fc}u8HXIIHXe~+M{%=(G^|Cdq)z;sz{@}QKg^4hI#WT;6vZm)~NJLWC(( zU&`E6s;SAwGgaqxm;}ECDipQUqVnc`wD-vho2ENTT5{g>e%E5V+%UP|#fATHEumwo zD;F}oZ#1kS@pdkq9cqPJKB}P}uAvv$tM~teG6{(uD7qZ&+kYHggTea2?p>nUZ9f5j8NV-!EGzkIU<)-<7d;jMLV&u7uBDFA6{;^**kLrpxKFIS^( z+r-@c^#ZgR5zj%mzaG6D4>4xT>>P+#$ML-#?atF;h9j}^EoKQxP_HWZDy^q7FtDFW z+8CchnlD$RuprN)ZRUH6al1q3mF7_!N;zse3bfyP-f*>1+W{jkI22SRo}J2NGu*w@ zxaA@@PIsr9m{>Bz@QedzH6ocd!*2FFp|; zuQqVDJ6b|X+Yr0o5&E)QhbwLogw4%0T&``K&Lu6kalbwG_*E-2eH;`p8(Mz}d*bvm zf_FncCY!**vC*w(Xt!E#YpKf(i?L8pXtUP?IbK{*c@stbKRby{O+SH`8p>pLKx0+U zKXieg%`mxot#Y(Xj3a6`$v)^8GTRQH{}OcRhJ*$K(7ll5jQfE6ytKNU*MRVJ@(^3d za2#J|nJaBZg zdf%T2`vM@~;^OC(U@fQEO%_nZT7#E;XK_%?l^_XZWgqsKIRHfgvNkGL0_T8vYvC_z zs4dr3zPdkNj`Gxq4%DoX@mQVb!5B98ccR9Fa5`|IoiSQRL8dGlOfGAdhGqGV>L#qb zCUpAE>Gm#VxmEXPbJ4Lzdv`2eJsnF*Nd>>8p^940sa>(D z{d0Fl>g&dqr{=5JdIrrKaX%;ZEU+l|Op7+T==tPOd&%rRKYlaYCB%Ra{#isU?=u9$eT7*l|&n|u}r*fH&_CXQucB8$|+6Zq?a zlv**Yp@ z1?%LN z8q(+ouk!_R66PWvDp`4YE>s&YWQ|(w;OJZ8aK6c-2{DDrwoxG6M8d zC!mq@CG$KANq^aZBrx?1&n@rJ$na9=iL0JU^+?7L-yivi$EHUwIbpdNEgO z?PI0OuhlilZPptq)+xD4Lr1QqK?N#o$%V6JpzA)XE+P>FK1*_sayZkr)ig_0%Y3Bp zpEHtsthC79OrtnYH$I2@Gt$8@D}-gYzRlO%%X1^hNs*Z_Rc3G_YLaIpD(a%E&?AYn ztRfM+ZJLL=dX+BmPDkVU*I5jpUI*flkia)8vi@*{n2uMKW^2e>n7?$r`|f6~&ioqp z+-_h1hd4Kb;IdrM!SV+x8d{Or>dkhi@Bd;WY6$;B>vjG8OMAB4+iR)2a5#gkmDtTu z9b%5dVSh5o|M0lDmf5KBwX`sCJJjEV;heeC;7H5=Zrd<*2@?2L*i=-NuOTz|DohC; zS>txgPBXvAQzC^6jAR(((wiupBu1ENba4M{P3b(Wsws3E4sl&BvFek}9Q@lub4tG8 zH84Z*va0^Jt0nZSp@;xZ_R9J*MCI==`@+61tQO6aQqy`iwn!Bb}U$^m{DI~ zkdjm(p7>&7!8ySri?vFF5T~{vXc4;r1zOkZT(qdk#$-)-0=0(Kb?s+;0H4FDe0?}a z_bg9?TiGyw_c={hBRJ4yN7XNWGIkcs?eQIAeFU(GdG5FT%Gvz(H@vZgIUJHJZJxDC zwVqGy7LMaXGIIcF`a^C~o+KkMU||VwYP}H&;DTWs`aPa0{pF}>2Hies!r~MMz@w;a z2yWIW2FiNAm)rz%bZ0^B_A7XYD_+kl%(H#x#PVc@f9tScZPj15XOS`0N_tLj9}n&Y zA~!q9w)d*@_J8@Hi!`&7Qb@~N1X6y_SQ92LzcfA{O8N`vh`2sVG9~q2c6y8(-Kb^1I7Fl3?1>P|M>hX=l$LSF!MLN zq>b(h<^Ls9PdRj#a6y~jDzzL%#47Z~5sMVF#1h-rPRGOF<&H3Nb@^ z%8L&5gIRZc)B3U~9A_dJty|cgx5!%7D)7jh20?$Ro95kw`MvUackILco{uV=(+RT{mH;n3{=W=r zPC$c{5srN@WHV{gLiO`ra89DY!f6G3b*z?4%CgQF1v~N+YhB+Ir{SS&8z9==I$s?LXPUU;@CeYDByT$usb1-V~55)V)isoGD1op`BqOp7WQ@~ zxmZyOE?+e@9SygR>JdNpP9g^)I6F2e56Y{3V%tyW_X^j`<2hMa3`N|4DQQU$?Kypw 
z9j1nlcP(QtKlDD$_!igKOJvO*Mg5LZTsZM+!x!<|yKOp~f(BmY(LSr0BYC-9TJ7SU zKx|d(%k46!?TExhca+=PkF>jnbBcVFDbg0-E9+-q+I7GpK{Ue2xT6nyC)g*(fBUF> z@y6)AbLh@}9C>h|*RXYw6^nhJ9b(tJn&rl{cyhqAQSDt~&>4fs^BF)9TRkhUR(HQV zZpFQIg%0QYkj<7)x+II(dLNHNryRKaZNrWDZtnul?ZMujTaq)VJ06zYA1!y(*OL60fE|bGN?%oz^M)3o@g#Bd}g5+)mydc{|ow?;q?9q z;t81{KI?v}U4nWI2WF3Y^vOEn3nG)jPW!dSThZICK9}oiBqzREeKB;Kd>o;+d+G!` zgl2>1*y!1|`K4wPxM%oQz6ZvP7v{qxNr!d_I_<3eMWOS|#kdr{JJ>+N4?I9!kQTS9 zR>|bjiQ#9u{~e%r65GH7h3={Q@a=kLJ2SW*E0%!m*IDqLf9GWt^+SKBVzIUQ8}h{a zPQe6zr<-c3;4hR+2dLa|FXLZ>%^gp$W##3u zU;4yfUon-Rp565u4O~8MH2oPL8v=rFV>S}ao*&l4$F9C%mM_0J!~99+p_hw^Rjda* zGG8jKmW z6?YbF>YqEi6IJ~YB~)TRB=7@G{sElHOCO?&#Iqhu2!^It2EvSVOuxiB+e?F%*==9+Xyu27@#UZ1`7Fwv>4c4DX(yjS3 zGsl&}#0u95+*f&^;Zsto##U-5i?QDo^2n9rO#f2#P#atk13~xLTE?AI2lASK#qYIM ziF{ToMi+IG+PUs}cD9d!w66oBkGhNDdfsh+iQd1(DJ&LnyX^$AN{~iet<_x5QXb4{^62LN$U$h z4b3tPc936hsVxhQdj>XD`gDBE%onneCr}2?$Brv2Tie^d2^KM^^Tc4NtcXUQw1izL zmA%`MOpvN)m3Gnx064FlT-f{EuYc4B7`i&>4Gq)QHE^O>hnM;&4QdSP zy5)Q?vZnmoz^Z2O=Qp>!Jm`+sI<+^J@*35`WX@e0)n3n z7@W3Z6KSMedz-x5Fn7V=90vheI|4YG{Fwor`CtIjqysaYft;o%x4 z$WT_3Os;M(3tRhKcQzdpV{h?)?1#eT#c%fgqF|OMk05S2swipw;&rwtE$n?-Bz)Ex z%b=D8DqLArNIUu--gJs4^k5*%_P3d%d3tN;PU%Q2L)a0K4y!NGX|KYwQ`09rUnG?#3eOC|&l1~6Mo%VFAFd+-- zA+$>n`l)!boviod1D>-3AQ&_0*;3d}_8Ck8mxO!3&t%EYB#`m5VKOnYoN?&a;5U=< zmUG*twfG6z;;iL1B9f6me@YIs?3Vy^uCa6b#5V-7HQ99LwKFA`mJfyK6GyQ97P46Q%-c4-XI3eF%MKY;)4gw{Udva_%Q9eHaVcdA;UTrlXmk?3}2>Jj&z0 z#uQMl-Aelb6z;aPt4dQb83%A}X@CFO!+rn20J5y&si&*N-pVscuV3+eO+|gi?dj@3 z{HIosS30YHzmRJuAs!B*$KI6valo$eNU`UU*dBx{YJw)qcH6}s?ehroHH+DTw6qjN zaT;GJ_*JB|3}aKjnxLoHlLql}v(UmKLM6U%%C-*kTj#Cga(4k>-HjhLQa8goazv*y z3p_d^9UWUvwqZG4YK!fDK2(bDX47j8iLk-o_73RZI7%5Qoi|sc-Mq(g{3DjF0BkHYBq| ze7WSdrX)wQ6>8AC~WITH!0Q$vx(!R#JpTwTF zJb9M~D^U6ezY~@(A^Q}Nr=uT?G>lv~L6$pv$a#ZMW+5etXGtz{P2sU4PeA(kIhN4w zcM3C)-yhqw0@t=Seb!9Q7^9Xlbum+0*(23jVD5a^v`UZ_$^(0T+q z>BPA5-@iHlLll!URM_Pls(emzgNBCoUew;fVb?&-8=|ZGDaAwe@Q}nu{K!X7K*7OK z_UVp%$4>x@rzF3PxOZ%f%yah0)7qv+3#U9FkGn8Ams@~l(Pm;MLXCfFrc`MZ2y$f< z7{!(yTL+W;TVUQB5p^D6g$DwhoJr;4MAu8sDmDW03ita$BkD7Z?-V#*>T9)37@Pn)mY;5@8?PP3Ahi<_BM_?Iw8W`wr^Z(#{vfK9Fr8Dj(uF-Q$mqD-s7i6$?tKXt!3yR7-s9*WojxeJW zZ_4N}T)21U*5}>$aR}wHeE2n*9LvL`<2vq%0Sclx7j+8w9sweVa-n(Q&#NuCjV$RD zEQ^~6E6e)8q&HgkYn9MT?+I4Lo=7G`JrGeGId>XOQGXjMKb=U%Z0-2?xVE?Pl%Yi}+jmeT+*pc*%4C-pQ3&CbpGb_|`nQa^5g z%%B=Dm)`698%;KI^r6fG zUsULEMJT0-NC1;_ZZ*+qlM4L&T(Ux;Ba&4l};z$5h%_0sOTVw%TuiDxuT z#@)u>%q3)T9#Z-e#*D<*s=p6gu1BJw5=%ieuWs{AKU9sGx< zCoNk&BvQRFg>sByN4&FE@yW10x!HHpJ@=GE7 z%59!%Khcgy-c(seXVcN3QmxLNud*PJ{8M3g!pqB@ctK1I%H`b^qcaEXnIk)|E%?W_ zmsTCM4&u9<{`q+?E}!2HTE_Y+`xCq8@-?FVcP?>mzZ3CDIDwBvGNY?694K*9-QW-A zyaZeoe+YV*qYOHo=9_%w>#>b^ai42ghwfQb-wV6ucY|IDQ|Q@WdJTSTgATcA{{e5S}nkrgK_8bHFXyYGl z@T37HTFWZ#oA|SK{5tcvwigCx$aDMwkje0M<3fIu;4r6r(QNDS?(u+}>M<~7AkBQG z&JFFVd_z{QoC6Jbv^|7slSBepMoWuZ?gur4k(QRUj*gCz#3c584gfzH(fB91FuQ{o z(;%KQ2`_kJ?2=!C_-m?rnG1Xq zO|d5y+}fqIz~~@Mf<=eTv?y5(NYt_%T!NtW{qu+&^hfvwqWSs1ZYt!y6{_ke{cD8k} zZJnpr_DKoC1a-B^@RgoQ%PiktM|@8^`R&xyG));RdU2Z6NCp@ld9!Muk&Q=BPk&({ zkr-T_@4(RK)%|@77`FPWF4-s3uBYD1?ShRjgXn5+Pg06OEG1n+3Ch$!MeGu<&E4i$ z8FlTrf7O*gJiHSx0TK{rww?&wlTk+az1dM|7^i5_&halyHczLsN$!qHnJ+EcAZCND@j!L35Az5X{b>&Y(;*$v-qVr`^p=6N6|z zF<1hMz3acH9-(7WLA7lUsbEI)-%OPv;dc@#RXD@W$C|WDkqkp{P2%o!XJGe}j)YSU zHS|3WX=D$+pCoLZdSVp!&3JzPYV^LAVIUC2`v}_-bGNNCV7w8~)ZFBXNAU$CVZmamOxX7jFPWXfn22;B$>r;NkSK$_?X)}| z7FDvyv6X;n)K4JHwnQWRI?8q#GFzTWN8=_2bbM-Or40z5NI(;l-~{BH>4Pz;-NQ4R zW;yacox3I&$WyTYF@<|d0+Rjtl%+mYBX#Vb6xc*P}5HYpOju zaCIDMjgbYX?8h&rk_ECLGp*bCkov0XNFk%+jQbY1L`l8H){bIK!5^r*lUb8NDE_1dpGRN0S1`p-BBXE=f` 
z1ORybSFO7!RcZiCf3%yUvU4E2q@&PwIGz;i!Zfwm$WOAyZcWIm?gm ztWzGw@nT&7wHOZ{pN;r6|Hp6)9;Id+6&-p15PF<-Vbh2$CCs+3z$%#C|5*HOXc96l zM^P$#);^Z=cvQh&Ddz$XTpx`)1uTP&*OjG>emC|gu}Hi7elp*h+$hKcF+VI4#(T^f zkySLwsxX{VI?M-v9vi5WGR@zqyE@l1Z-LRLUHc}dKeb68f*|!|Q1U92QfeZvaH0=8$=U(C zSr5Kng|&n}!^1tIjJQx4L_hRY@aKs>dFjHqFajrCpm5%69X-Wn(wVUqu&t7|XU0}C z?upz6`GrA4%aCbx2Bb08GH_ z)PBlwZh(v6g~KK|pa#OZAU&@~2|U15cYELj>y-dkMD{0F$sITz#H$R(g5Tj9}o8rzF>2P$+k#<(MbK{cmW>JNcSv4b2? zs5weCjY=#yb$Pxk#Z*b>+ch{12&2G7(UGZ(m+*#;CV+2kcALyCFV@#jNy?L?<&wKT z!i$4O?K0~4Qg{O3R>V&hy2Ai8k)!mU9*P`uA+Q|{Wrrf4$w+f8t&al2*82VY4%EXt zzoD<}2;qQKC9Wi@@Ey09#3FTP08L)a^Tu;-X-TTav1a1c4~88(F>3LXJLJb5 z)vM{t5xcv)yIo&Dya)urAVNfVUL$L5$oyUntp^ut>fI7`UgbXCJNc}-It~D5qq!C? zLZQITWc~hoYVmzpla6ZB;kl_CuWHe&omroDHoPda0QC9%73!s+3MDS7^ws|4ev+jN z5mnks=^d^Mz*C`|`vZ>v_bnatlizU(WL{{FM^3IHwk9U>L`Z5NCxnVP-Z?zvhxRGs z*2}>`7|0BrzCBrH=HSRb?Ya|-Nk|^Xkb19A?aFtDz~34K6s1#Q)gnxDB4({Xy;<(l z8vsNTfk6j1LiToc)M0C8>l-XCVR0-|=m(Zf0V1EPjYm2p0HS(=l`PP?&_F@K*m5Wv zZXkeXold@@?fNx2nLbJ;s}|Vh&%Yib*|-}GU(=a%Q#HW&IRAsFEn&T$3vzg3gzJ77vc2a)Vilm}tDv z&c>E5)Ig2Qin_n=!^0<55fl{k&{-OhElwIGJNg4r2`lgy!ZYlphjwf|iUW@hC^6Wc zI0^}QvBow#uS;(KiZ2DW7+SWrOp;{rFP|HETiowlT-4v6T>%lr zZgEtvyi~vWz~{Rsi!bkxsyitVnr?3+&EJy%N%OH2m{ z`xVzJ{6a8Lm{^{_cX{}HBl783Wmj+tv)a4^f()A9NvU8w)6k`saBTFeQ%oOR31i2&od`M;Wg%y1HP{3*D-??nQ%17}1!!@wiIMk&JYOzEtsqEh^A24|_k z1~VKEQA@%s#8GQO#N#t=}6KVH6)IR+bF}%itgZRHkBV^G0 zYdTJ*v?YVkyR7HuXQm@Pf?Q$m_~(~reXd`#N`D};B=s%7>B=9+`PWL!i3faESznEF z4^!p|tgswKffgF>Cn^C!M?QUXVPOE z;8WO$nHxq4U*;mJ$&vSsk^j@ZOSiuBYI9uG_bO(3!SL`feTRw~#6oXq#{mz~f-;2o zmr|wlgB!3ZWAGbPmaq*5eK}q7^ua(9OjnaP$3}pjquk9aT;}%6xFvgp>{l|$2j7QA z$dwA~_2CI*96NhHO_xg^m1SNR_Wj=>0CqGhd{({Dt}QTI-*@zdTF^m;KvuftSGGgkqom zG<&$2th999nKe;=J)b9_N4a|uVfX28eCpYAxT zE#kD0)$*~-U`q7jE*L=P78D2QXiGn>xYv>P#&d^%;ya!=l2yXPMb9#|zw7_>D9R%V zK%6uT3`n|w4m#Pig4!e%hqGGy+QM8DgbaosT`{g|8XSqUjr-8{;qCd|crJH(K*@gf zC?Abe)X&0SzN4^8t;EQ;q_j;NA-g*{#ooG|17iuyteLc2qu@!V9=#MfE)Yl+6^mtx zTOevA6V*~>ms|a2y}=t32!+eO1_JqFtS35ExNiM6N83>9CvGh*m6=%B%Ks^nygYm} zuu5s_ii#*>1asiON(k%eB4`w!2knSgS0fkp%Se6J9s(YLo?Jz3b9~<0>J=W=!8&+d zwIJJ!#^z^NM&{A>zDjodnO=YU)K+YD+Cud^3b=4~@(eeT=72+qL zW2QGS1Z^a$gZF!9IbzG_#4mUG zub{%sK5bo1RZ_lB1psaC@un_FC5aPz4@{N%*msC|a9KL0yoD7m9fo=kvnI`GqcipQ zn7Y#y=n=8~=l3HgX`vyj`_+7XeO+5$ln1Ut&(uz3^1bf?Idx3=vq_Isb&my;!y?0a z=U8Jl1doBJIH{8; z{0^vNV&Y4zk@SUVVEJ)BzK~0E&Q2Anf>Jzu^R1M1cE_KXVEJ3h^{N{PFV*{sIq zR}?TyGMG_0@Y-YOa_H!)|8%aGz0CD`5d4_tfp#6?fp`WDB*xM4)2EWZl(l;!5P;?n z2^W73{Ve@i&@CncF|lLZkZ<-`cu#3!ktcqosU?G~)vJx0{Nq}VDn8u=n&eI+)DM_0 zX~HP75M8gl@0IDaE^KTyD=JOy`xt%zcFS%a)8@yYP(kupQ>>$PYAVvO#=Gzpk$FUs zpl1Q_EE;K=BsMm7j+9pWvc$nB&7G}hd(tjCw~UrgChpNZJahb-rw><+Zo4ReNv>ch z_N=&M2PCzg-@D=8$=O6Jh{}4l8(ffwaIrZ08C-CD{WUyf`X$tA9W7* zKg|t2jCMa!z!Fr=F~_8H;u!~?W*cTzR{e~LCl{9OxH~Qb=u&>?!{U3|Xf$k|P3Sug zJXxY%v3JMs5H26UFn&KxFlM~%srnnNj&y}Q&Z!by{;ionJ`a&|Szt{V$K{QLKBHCBQr zbGKI`5qIj;CsgG$CivCGctktV<32qMQdJn*#9mDVmpC9MLv4HwWi9wl9H-hk zp`L7Y(G3%bp{zfBgkF7yv=wo>74e8rn=pVE**wqpxLX&bFl(*cbRcitXebKvd1Udw zD=$QAS{*!6YzFn`Tsn#p?&IUN9gq1l911$B1-3bA@p?MZ$Wfi5H+6W7)Se$?&%f!Q z3RIQg#zq^H=bSB=?6B$LUh8aYR`b3@i;bpKl=mF=;n@MYE3Uwxiz3OWqun+S_jMLE9b;^Pkj_k~ zM-g$+rQ&RDGcz$oef|tv`9rc>w#`@8*oeanS?h#^COZawuvMg>v-+z8UOI=VDyG;y zEs#5GUc`4ZUrFE$P3@ebQ2;@UQ%(?nZEd7rzSXMkm1xK}4*9>+gHGBOZyB=_!xB

?}kJuGcCs-zW5)u+tsh2;Gv=KjMj!@$#Fo}>j#MBWq6h{cPFGz+l zi3T!FK747gp5A?D-*`MQ$BQydfI+meyF29X`*&S;`Ze&0nU^h2G-S6x zA%ntji%O~7>rt`pDR}L7&`;zgcZ9zD?>q`%X>5XOjZbsF&>?I3BVtk%sfin_0b^Hq zgoJ9*(a0jbi_61fMX<9lusI{Hhf5_-9xr~l- zR7X>}IYneE@36kMsi^K2?&(C|&;8|5*SBLnNt6f4$iSN>^!4@e(73}5F%Ig-N8&D7 z@%086E(~W?8L&q)F}2DQvKuAd-deR^O$ZC3l&6$vf0#f2g21}~;jMumA={%L9v*hz z!O)mirM|7Ba?ASnjwwI|sfHTZ5Rd|Tn5?Szwf;qItA_c z6)l$}qu)p4GD3Euad#}P3t2wIl#p66SXo-CyD6f`iFsC?lNM!>_2FU*-pPsM2-2h5 z<_)vH^x8@Ieo1d(H1579Cl~bl;^yw&#?ai^`T3t7Gy@9T`snXfbFEDA0+t;=L_AGONY6gfG6d?NKNp&0e7kLqOprW%qt*fgrCj)hTMU0pDhsBS`fFu8~n zU?_cTD!Ic0{R3tqy!dJz-Z#wsTc5bw{rfR8GZx~wZaj`vC;(s2?B(*+2y^dfT!T3g zUnX(kxvkcEM`s;(xNZ)YA?gLLb2|Pj}Plra?nrCIBH~jA^5~v=qJRSC&&#ISGQ&^|(>3{VYw|G(|8Kn%34SM`^XWv`&kBnJ31<9Z- zKZJrppi}@!YEMs2<|m*(`fDU5kqt@2gl9`tBUgvr;bFNDVChSflgpFd^6&A`Dn=U8 z)2K&*3b9dS9>!Hvjw=c-m>q3N9v&VSDY5rIf-pS8CX4py|Wcw6a?((Spo~k&X;+py|D6xss$FDaj%8`}c`}bL;{8mnehK z9)hn{UzwOJI-!6U^c^BX!}r-TYh)a&=5i1S@C3DPlYqsGv{La?kN*@jyZ^4y@(GsC z$7uQB?EHOkoTO(0lR^Mc!yg)7zMps~?Xih$B9K?Q{Fz$erDGdnr7rO}G@Se4sqTP= zL<7G2@0YK@(c{Auc;c*h4-N-Wox8_nQ>5I+?3smyMSmV{tE7TMvJr!hNq}x7V0(in z^2X=A)|hxqSX%~>My)OvcwaAoyvts-&?LS9+8ToAF!0?2q99IBU#hL6Lm#Nhy{kBg zUV>LycK^laloi%j+T2aghv<#iPAe=@az4e4hw#VuL4Ao{gMd_a z0;3Wj9cJd}M)^FROzh))-q+~3QlOWuD#r!+K_O%HLVjFoOD-d8D*G4j<<}eu zrW$5jgwC#ZXigCsoCe=Qet*Bz+A_peOi^{KUEnPQD!yVEBc>DzG6@a+%6}Zybv6CB z_Jw^xDfCqGKjlJ5^Yrp4nqqFcA`ziQML}+aJULG~Rt>CqRYkSgWlE}NiuFX!DDz^; zj%BTkq`?3fJShU{-Ev<(!NdOk);1o4)@=CN1}IO_3DW z^#teUM8w5MGXw_**K&s)Bg?_0YFFD~GM}BCZ=Z~Z=*D%1p&^mU9y%M?j@3v3vlZ6t*dC!x#(af53{y z)A@0};_70H#vDtq#7^QGgC&{uQrX0Q%qtnyj*V8yP=WsZMP(0;4YaLCaJ#4 z7EhB2Myf!C4tD>_Z{q<8Vb}s}_XRrCk6ypqs}dIRhntJ zYs;2MN|$Gy<`mIq&-i%xe|V#XVZb1`MfM-f&l7YbZ(zeOM&(eGglst+SXPFTD&=1L znT|#cEF(aq>87;We*nBcw>VN#zvyb}RT3PM{@j+(KIlC>OnWHTD`-I6#!{EWcm2>? 
zH2(55go!j_P%?N@|s{8H#bXo}=xNl{VG9X}EQjXgQEow}7NKhTcU2ovpH zu?;{NA)ooau)J`Eb!Jr|PcqXW;-*gRR<^E@Me!*fY{K70<)#|;-_3{Pwl!&1{Jbgs zU12W};H72kaDj6~BIvnl;Ji^U(DwZs1`Nz;RNfLI%f&~l1eO*{E6T7PPdF^Td7q;a z!^Sbwv3BAf|Npxq_pc=V-yz!j2Gfp|Bf6QVYzPG$3NosoS}C*7{|~>D7zF?T literal 0 HcmV?d00001 diff --git a/_site/assets/HNN/equation1.png b/_site/assets/HNN/equation1.png new file mode 100644 index 0000000000000000000000000000000000000000..e0181f93ccad978ad9dde9a0d7815b4f0d5ccda3 GIT binary patch literal 11990 zcmb`NbyOQ$ySRZC_h7|IfC9y#Xpj;WRFKf-3?3w-Uckfw~m1mxLBUO~;@NuYcP*70tpUXqlP*6~_k=qBX zhsYyQ5welU-9t+$B`Fk?idftm7&@|zW}+YmMY#`;?QcQ0Fx{Rj$zU#HP>|4b5nIT) zprCL{JcmkYcuntS?0e@5Pj)*tO_G0P8ATmMO(-<}Kp+Mt6H~_gQBAh9y4(5_$0MB} zy~MbC6Gf8DP7%{b@=*vJ{aqaIR9(OokrNSCHe*Jvq#PtD?vMyOP)PG_M$FyK!W1^c zK5=|hVsDor>q>70#fuda<}Vea$d9-p1}yic1oqG)h%M3Q1N>|7QVa- zlt{2ysv62`jfRR**_a6QBow+a^&SDhJp(#QV*z*2dDara7YQ&fs0|Tc1Uv@Z&QeQD70#CPgO^qsXmmgJFMrR4)I|Es z*B?rMlnwS?4%eJXtes^%=kq}8C5|`2UQ9T;eB6?Qf?{UkUGq@Pz1oj%XBzy;K@kog zBdr^lDg=_*qQq{Eud8V;wyZ8e@+5eRQ+Oxa4kalq2Mw^B3j;rRynCBL6?TN6d7nc= zZRB3GrPb>zp_(dr@>SX){S&QrOh%LCosN4C6KD3fhq-08OJunT%0mb zKS@0!VL;PDgC2Z>_Vx$+$W_Em^gEvpf65F!wwAffF*mZ!Udv~C)}!D zgMrtow4t?-czcx;tobCY@mJ@rJ9%+YJue~*sSUaeNXO$zBRz{BR3r^GIj<458JQeg ziVgu%9~ZUC#RJ<08A7*|z(0rtd3>Z4Ha-`Iyx@l)%YZze$_Od+-Kv6%f}6IoqIw6dF}`D=aBcMau3G;nUQyX;!kU%?ZZtsSjJXHUL~d8JkN_S0@;( zzxcFQ(5#!q23JpAqiS6p-ri#TNd)VasYkyw)A*#0?y;!VH|Y-S=Tzz6L!IkSB6vNz zFew)fKQdS7e zhzD^b<~S%KZt-ZD6*CvJ^%iH$RyhW2+h9I=AjCt(h>8hV?fJJxp z>6fyfk#Cry;_fMmOGv_5pXF5fQjtf8i@G0iguc;%Bv1f%6OWRDGX?y0yylep9$)*B z;RV_1Dw6^ixOLc_*kcjv;oMWbj*gRW@fCWd@ohT^Yv7N$AiD7jo}_MrntSpbPxO2I z5HD?TNr5fgI&Xp6CKCgCBg z&tek#%KfOz_c{UJg6Fp9k}x(^-PJHeJDRz>wRj)pc1l3gHah3>mouwgq{oO2w!m1E z^ub6)$u5x|P`h;VNx7$I;JD%GzT(%5l(=>?GLhWph8X%YIR~%osRnr+ZMo@W3c!Pa zWsS3#pNC4E&e5ZEg#>gYqJCRu!LSQ(6km%OpXfiV9l6S`XTTPVAauFc zj>PF7u_EhuhvyETgn3n&r@4Qo;v%5OkPgSPmLYgLf#H4ffezasZzylwW)eFpPBpwq z6fAw}4zd6FxW_-=GjvmQxP?r_xJZ3In5bTVCfFO(k=EP=#H5zn+8`Mi%WS6SYxMT^ zQ*9*Ac`|$Oj87%!!NS-|e^T7J@WctUV5w#{gG^Zg>nyaR$}z0XE~FW=XL62#9F^KO z+6-Gs6@H@aJOMLIiZh63@Ae<2?mBr$^;IBA{;ea~uRv!e1uJB#ym+#`w+9t)#ae&4 zm8pH7+!P$u4brp^&TD(ooRour)O0L?t#bMw+kPBC-!~*{X>YxTS-*n=-sC{Fm$T4J zvL*pIFV4rHtX7QpxdU7ENhL%18zN%f&zPFR&EzWKpI9JjlFnyq8}?e-HMih1+a_v> zN0ttp?a1)VkTA35F}cM`XL6=sb&~AZ>52w|<4U-0*g41};jz8^GKWSsPFkYcaEo;P zuP|NU9K?B&xg;sBw^O!v`NQ<9l5F2N7Qez@B=rD}-Jk%79?mR8;e`voW6TmAi` zQl4UEN+Rc11_qBba|v^GAqgZv>C3HIfOK&cg`$Gn^=5R7c%8v+gyajgl??1 zgK%4BTojGI?yiR`+4S8sWpLRX4yaSoES!|nr3VjJ-Y`j3VSVg;ldq0;kbL4_sO-g zRjc>Qpxi++*^?2Z1Vm?9zmRYa8qSDfbv9C!upb~6>>dISsU^_jzJ^p?nLjxoL7!Jy zG@5xQ&Rh04o>|^FZRJ$08`I_W`Nr&{hHjfe+L(duI~_2PO)pIUTp}Q5_y8H#%XDF&#V@6KQJcZrcthiC?H~ zdROB+J?}5G&i|!5#^dx@%Y989?&B|IE!;Y563v{XLn+dX?K(3vCD}`+JSM+gYe(pt zHKwE%kD!_(CHyLf7KtWVQ6q!^b!g8rb`g|aFwRyd3h$ixJu+rLm~gw3mnoGr33H52 zERe%#Jh{{zH5~yx#b<>$V@2X+#Xkc{1qtoMEw7Tp*IVBU$pM9eo*dBZ#2rSZOg*HB zl;TGb0aaDtSdqaXs5~ht5R&r|ix_||6A{PC3V}nVh(Sy^$_Lg`V|(d)95GL$zKeWU z_B(7~tWy$h>xqgZSf_l~yxNQU#N}Uor;$NzT7QrJ?-Q)|Dn*TO=F!|pp?jRbJ?VtO zFbG!Uy@Au!R_>MSj!-jtL9bHe^_w5}YoO@LXT+&QTj+zpYcsiuWx@XSpeP!kYB?El zZhwt#!nX{Ww0qCS>w(j~p4wtWV;B0jHF>o>;Q(3&mHd6J36vBdOlVXCuGktr`Xp`! 
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/MVRNN.png b/_site/assets/RNTN/MVRNN.png
new file mode 100755
index 0000000000000000000000000000000000000000..99d9100d14f8b07913ed14f8126ff448197e8cfd
GIT binary patch
literal 13148
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/P1RNTN.png b/_site/assets/RNTN/P1RNTN.png
new file mode 100755
index 0000000000000000000000000000000000000000..5afdcbf29695b39a413cd73371af65783ac45a8a
GIT binary patch
literal 10300
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/P2RNTN.png b/_site/assets/RNTN/P2RNTN.png
new file mode 100755
index 0000000000000000000000000000000000000000..1f99c784899f07f7aff4c45172aade01af4b46a7
GIT binary patch
literal 11538
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/ParseTreeMVRNN.png b/_site/assets/RNTN/ParseTreeMVRNN.png
new file mode 100755
index 0000000000000000000000000000000000000000..c2c18daf4896b815463c0dff5351fe1cebd65363
GIT binary patch
literal 10982
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/RNN.png b/_site/assets/RNTN/RNN.png
new file mode 100755
index 0000000000000000000000000000000000000000..46afade16db8bd41e3878f2828609705941b7c17
GIT binary patch
literal 11878
[... base85 binary payload omitted ...]

literal 0
HcmV?d00001

diff --git a/_site/assets/RNTN/RNNModels.png b/_site/assets/RNTN/RNNModels.png
new file mode 100755
index 0000000000000000000000000000000000000000..c2b00f52a34aacb7372d7b057886f683ed141f2c
GIT binary patch
literal 43854
[... base85 binary payload omitted ...]
zGw6`I>3YXUA&b34M|Z;=tS;OBoL0XW)D z=l5jh?6?4|F~b+b9%*E*udkciLZ2mS4${OyLyz-eQqmS8)lZiUY7mi>=tix=3X=cS zHuk+cnUCPdu|+^tbeEk59eCz&qQ*e6y)h_@)Kre-1v-K~bTb5tq}zNt7K|{Dy}Qwh z<>af!tJ$yNf-tkNO0D=zCb~)KtZ%Oj-1lO7bkrbL`}qSqzOVxiMQtaFX6iE^X1oto z47a#ZnIWy4N5`8UVe^4kS4V#D^>CX;y?gpTKF{iwH%20FTjtcb%5F=X8g()nb^6{Q zKI+sz^K%|JK^OK<6Bf`6;06vk3I{oc+_ zYbz@kumRY@%#UTx&_?s#xP!%^PLM8_)foc9KXA%DfcLu8Z zGq~Y~HnbsdgRcq;4?Z?7bG@z0dJjuZ+1dPNQ#b-@7KygLI}Qv@*6*sSu;g(xuFWx~CTmv|SFEjew*~w6P(YO+6v4HZQ*Wn!bdCUMT zBPO}G2wU`w-%N}VOLN0Y9b3gz=(#XxL&>U}ZCjq~+Rw;%6)rlkBuq>_#pf<*BuU|0 z7GNb?^VFswKuU14Fh#WxQgdn|WuXl*!KPI}?NAT4BfGojdzh;g<>f#2_LkK-@qu!W z$HBvmknRaqv-KxZ6?GFN3iCd+KUe|qu+{R0VmJNZ`~5G^1Ac(LvEf$cSW!3Rdfizditwql>Wzq>mtFE8@qAI6V|LKMxwjbj4oy7~C@PB8$OJ`z7oP4#W8(n`9M};;uCodq0nyc!pg2$+Kn&Jbylc_bCR96j z{!3X3g}c?6adyxGo6BJHhKmokxy;`+={e{xd4MGl82`#818ckBRV#LL4&6umZzc91eNGk;H({c%;Xj9&}id{%Y( zb8CP97W67i2Ej&DU;Fy?ox68~3*_bFLts%62-Y`m!s`Q-8dQqoQIMj< zZ#!H(a_p6109ld{f!*^gR@k`jDl;S3oIRhpc;1qU(Uf-+32@H3DF49+WEA; zC}OW83I&C^^bI8l#=Es75jUw z_3rnd{eJe|@8|80=c!h>@9X~k&fz$Y^EfFxg=nU97x7qm1f9t&@sHKe4SfaEg0qRw znb&Q)T^|%}6jW^wN=yC7%7k2G81#f>QK&)TPMnpdl)w|Q*+l?4xc=OhE!~O-zE-@v z6b>7O!WF02A#fEAJ!fB87US#w1N4tL+9L*KlA&&%_DHIep=&@Du-StI$~YHz!R3r|0j#g7?q_))2@D%oe0 z{KH==KA&u~lL0Z2{i@2xah3Q!Ou<^I(ekX1XCnCuo%zKT$Cc%IlMKC=oV4pISUEc0 zL5TsU&assW?2FKSlx+9zodrfmKh&H(ahjLhGpsZf(mrIA9<2a{!6Ca*LeNM*P3SGpPFCl={s3TPS_65l1F;GkqF$n22 z39CE)jNq=Dn3OXSTvTK;bLNg0E>gl3f*{4VAmpAty)-=hjow!%gkh@k@&hPo-Tx%h zNlHmdQjJ2tzOSoO>eb8m$b{WcBj|Pt+uHzjCn*mNTr&T}(UC68|B2hIuOZtd+09LD zRr?XWUrx?RkrY_8J|t{F3WfHJezFJ0D9*!(Dc>u|Yrz7BDdoqEDa^{!H%jwgyjX6o ziz0>Xi8<$~iaQJU|HZ}_PM9%u>Xo`6-5y6N3>k{f%}ttsA>T1fBz&qNb%-{E&PH8n8xW5Fawl@IahlkgfE!erj99$!ClS zUIv-*U?>9_#Ed(jo9sY=hii(LgF^`)Eh=K{aGt(tq1|8(QbOP_^pZSe*Nrvj9}#H* zQAIII-U0Ch(-tG{K`qZr(QFTo4{zTxq%NAcnwh0u^&}b6d$sz#pz}IS>!@K78gHdZ zJv4^){o|oOu$!O-M8&LrDR=i2I;OfUtsN$_?WU?@+lXJ~k2))%cmq+YOT8x_BWq~r zCv$CGOG_9ID_j{YanK%6LWef2!;?5doabvt>N>Nyelp3=dWMgT@4bkKSHR8Tyg&VL&uN% zkzB95-?1ejL4kVA==`77^v>UIPI>>~^~MRu`noGjEPY80(@r_jo~tO?XE%9sjU7#} zwe?l1j{LwE&!1(i;xnTqpLW!F%9JVE$(4V#%(=WW4aXEnkhZps8#a93aM$_<=bBm( zp>JgEI?dXkbC5a&Z@7c(rB=ZmpE)1kR@AWca{b1vS{c1MoxHR@F7D;$A55;OsO;d- z#ylN)jS&jsWAl!j2mPKDeLJKy;n-cbIrN4yFy8`&5;F`6At1A_qsKbgpVC(4k9$*AVQ?5YM;G$fJxK?H- zoW#&BD=*(mMWui0pHoI)GxzULWf+g>#lQ)!2=cFAtaHW3h zV;Y>TC0m5Ch|Sasp~@tw`*=1iFFTf3k_Q*o7e-WD*8A8-2-cw)ue zlc98t{F!gxbW&#Ew&e(Dw3CnQ-~Y0%?!m-JnIT9;%+!~Vvn%IOb8$ey*{{`M61wFg zn4|qkY9VG?bq}BI&@(V-K}odj7EQ3j{e^`e5(aaDd1=#}oanGaf`hdbWu+?y9*VsAL@inP{c7^_R-4H0;iwu!5J~h(Z z{6=x{QvHd5$AIFk1V&2umOFds$k0I^{O4C~Rz_9i5$DBB$owo zao%m;kzwx{AZG~q*{l%eHh8%O>hag8XVgbUUc8s(M-9+u9Y~Hen0y^@?}shh&UA_`pU|;JLt%GyA;Kg zYM4H_o$|SbLdJXoC6}u33Bx=M)#?F&qDV%?4qSl^#XTrZ$ARI>75|n_@2_rw$>@DcHr;s9LmJtO7k?`5KXay+ zV7Nqm$NDPz9sqG1#}ZaDcP@fSfOTWPTa4qo{%xM&1HzRk*w0Iks}WlQ#a6bP~Xif>D8;3 zAjr$h-w#YuKIcB^7Wz1*T9)+gHJ*ZO{CEk0>?L1)*nMuysE~6hFEo2>0QJv$p;`9a z%tD*i`pE0cFTp3M$xFuQ>g#XbA>DuU{(butm6c8Zvy~)9(3*cM_CTbd@O7T`!o^2U zeUFEin>nbaYD=k^VGG~~@8LPOJ1fbj-pCU%pIF}MGDD}+- z_SCv@_vE9Xh&ME{&}rORN^H~zgq;sP^Q59lnPAFD>$Xq8|6mf{aBhD{S-PesLA^%? 
zx4V3MG~&UO7iRjwrpBf=DLNVpf_k-=yTY&EtN%{hv55Y;?n=Cp&N&%nO&xWPbmIt$ zWz5_|%xJoU%+wRl_l80y@{#Q6+$Cq-E*S1G*gEHYCh9d%7wbz_Yk5|xefk8eD{*vs z04t3Q^Z!^zcStCL5uxUQsTKp+6Z0fZU;2$l3yl#_`P@EnLUp*WqnqDE?|v$OIeA@o zFe!Vmc;|sb^L=g%&`p{@e@5Qjt&Rh~+S|?A`#VUnq_(qo`>CNh0^q;iB@6xhVmo|& zG)uFl>12@vnH2c?Fu??9oU|!K!=quWA6ooLYcq!sY5dHL%X;7~yupQ4Wt8`DLt0f2 z(oyUIxPOdVo=Cz&4Z{Bd3{XmK^jmzdR(jZTTy)j*2BQ^UQ&a~D@gNXECPUlB!`|+2&;=3HUVM_v%9Zjl#F10k z-EwZH9%NcdH~dHZI|6TP>^(ixv5JHM`rcjL%rhVWWA8W2P0licaJX;*lsR)p94iHQ zm3a0sb-jDk@~u1i2jAB=Fj$4i9)OSedjz8us{;dn0MmbpSUFkd#Qc!WWp{>TH5*EY zJlrcKtmvq**0hnNDls!$^iUUtAaXhRKA_~3pL39KKm!8n&_r|UC^YI+)qcLZ)stLJ z=aF-9G*jXzV7G2{8rrw8n;mqSoM1IB_QF_aXX60_ZZL2QYO`;jCq*pGTb-(AkG_2e zYinQ2^BiSLEW$7hpAY5Uy}?6M@sSEV(@lo6&hUz_mls}$V{L4bO*e8__~MWH_f^GDir&U2DtxcjY%$Rh<;Bj5Y;`*-dP zhG0ca$@AxzM;yAzko?)R;nvnM@$n8;R?%r`vl3@?8w4YW+DA=MF)1Y_49(A=?H5>4 z;qfjsBt*2Uzpy%T{hMI*oSCiV*CR(9?KR=!q%~G?a$-BvhfJle(268;UI`@HUPM?t zYJ7(G@4r@7w#B>bBv>j{9QdC1Ox?i-2A3K65^9lDkAO@#^v9+bZrPZ}P~m>hjFwJz zknh%wey=jmZ)W25`wNYM>!53J!*6P$F?;SkBUbU%J_unV71~~~)tZUxcoxuq?bCi* znke~MZ5q7y%kz~bWqT+B`1m6i^^i7lj!uwWJDgNU_PTfP-kU@HmcMzaEW&3fjB#`K z05gu0F-lWcRJ1zq09=4E;rl%|b&^EZ!bt*4pw86la_<>f#*NZaR6nAn^1{J{7Wtypiyb zklT0fjt+Sx`=rg+Uva~ACXXP?0$yB3!hLFijG^jgGW35CTz@7xqHPAQTPscrgKfzQ z=a?-c?+wlCXSu?8;>5x2AIe88x_9B?@RAAtX;g&YCxFB(w1S@ufa9kn}$Q{)?k)55c7^O34 z)S0}rrQ5ws7dlV)5*uIhW=i~~4RLGNYWAM5nN%m0;P<#D2%QR+B#)}AFTjb8JFrE~ zWMg(-aqd`Va4;z38#>5rA-yD~zX4HgYG~kjiiV=}-<}SvRzb2zMgx+;YoKMzGn@WE zVoc0)7P!(l<F=nPWU~+fT@|-6O*dn8S8*`U0XrS@`kuDx0xmw{F`eA;5g~8~s+S zx@j}1v9U;`voGip&r}*;wghc!m~Zv35sO~srz8(LpsQ>C!lt@r_R@!Yi|yw8-JZI@ z-(Ou-Rd)FNIB5iCNGjmTxITO{K`=2Rr zmKauGLB=K*Njy| z{k{GT^XFb(J>=a&(Tlzr^5n<4C`H-6VzFsIefn3bc0wHLyc`CgXxE5vP$#586x<5y zb$V=|;AW(R@eGGP#ykNIQO@}T#2{>>d-U#2Ezw-`++PwSM!b~gBDkZ0XZVh3otJtb z{aTh$L1s@GHBcEcn_0gS@1MVO&LLTFa`r~ojxHf;UdR1;O!5JIA~pb`L1_wnFe;%V z*atl`dO@B~6yKC=P}Q+r)q3?vB=sA{#nF-wfz)3IPfPL=JJV4yMdASLS!hMF3T|(O z9qoPH*FATY5V%x;R0P|Xhm0dBjzU9?1u^Lad8kWaO9H>jL8af=o|58%+}zd{&6W+K z4GhbtWGWzzj`^(&N8yI){m0%4(=bQj&8snV#f==az53VvMI;4X6;T@kdTeP~fC>Dq ztpmW&sM!jFL&^K}BS}k0@n;t^jeDHou^Ft+;fIO8TwHdhr5!zTWHTZUezLaqs6R*P zXaU4jP7xYL>Q`Ch7OHLvSRsiP|^LZ7Fvo|g|_;nJ^Psa^z*w2COv5T zY=AuUGIXX0azCvq6|L{k9dJGmA6`(nbC9Rv2HS(DO6RWrQ+|4^j=!zF{fl$KQbH|; zWe{i22lo<~HTm%F9XE4#a&pY8y{#Lnc?Z1ZX-6N^){dSg8zU>j@%y%$+SQj4hP=ER z*RO~A711Apqhk&Uxbpp6W5uUe$bTJzlYs=#*ne|!n=|JrLTU+tkKWKVbvxiLQknXo zbN{06AD{QSV7KqlTO?lADabmUNo{_8wG<`9N55z3ua*EGJY3^!Umo?56l!sFc4#;P z8YAcUYimr5t@cUf3(XDt9AM^90|)}uANMTEbFb$xd__Tq)D_mh+OIQfr{zGZpJT`5 zX2r#7E6NTQ-$#=8K74((OMQ%fa+5Ckmp_00;p4|Chn&EY3qjr6ji(Bjoq1*FVH3Un znO2`z&@L_&NH91ul#IsVV{)eaNKhs$KSU)|DN{jD9<64+l zgV4RtIV%oK*x9WER7gzh?l|kcvTl#JSEX9w#PuJRsU{AzeR6UPKv6(aj~j+dNr9cS zu7YadKkWQ0J3Oq=`c7c7qG^QlpuE=f7EMmX^Jq=W7%NTa$J|vm1Gk9|T?I3sUY%sT zDUWg$y~24ji~Rh&f1T=_tWAaWA7n0;7+=4}e$brxAFNH$BPK177uwD2vOe-MTmQK} z4n?1OuNW(lYq5;2-_kpBDw0n;s-y)^UY?&{P2$Zi+Jxrx^)`wZO!Nr)sI8|QegquL zka(N!uw`bROO~v{BZuiV9I|K5oarIcF%^yln%81=_s({jiqayC)-A@05ZUzGa@;|7 zGq7bM!q1O+RSYKOlplYT;8p|4(oGl)9t`AKcVm6yVl!!p%zIXv z0w4MZ*0J0UHj|EFbJ+H4vzBA`VkBJvu0}HXej;j(Y;mKa%Q=OOqC4X9<;k~vV0@_T z8@slsvd^Zr))V~}?e5H(Pnd(3_j+HaC~c-E86#F5mX%xQTQC}HQ(>*tV?*f6>s>6; zN6)|g41C2Uz^*Mpve#&kE~@nsn>eQ_EFwTBI7CP1(GydJp)mY3#T$$3Y5FO* zu*|r)b>;VWy>AA|{`)zL&qjtyS+z(@)<0I8{*F-rlp3plf3_dv_E##&|4e@V%g+%E zz`2;*0oEQvb5oUQI11|kpa*)A3t$oii1PrrRDS>^lWZ!Zpe6O02*Gsl4)0z zPQ+CV_6VB)aYe<47lnHdoQ+Xf4;e}TrlCZhW=&Y<8$f3v;w0}dBG2kLg^gBm=7zo+ z54+tD_QlwUn46$s*#@-cW3(LdgNRO0)<=Nvbd*bjRrAey#NnSw4nQZQBmj`#Cad(u z4%~($ir_>(e6Z6Spv-aLbOcEZ-@*sT2!?5QkV@)Is&?(^#Vj}Q)&|@C7cQ(=xKM=o 
zRx8*jV#8^<^z~*Ow%pyH5%;hNhd?hIgivct_=h?6_VyL4M?X9^+unWwC?AU6T3dZTx=l}u;i(WeQ#x__s060sIu5y*6t2Bhk#Q#+03%L$OROsxzsG5HQ~ z&ym9|QBu&yL=FD$w{cX7_TPV3&FHE+aN7n0K88uyW?Wm_B}`&}m(> z$1n9EL$Y}tObcT#eg51zrX<-gm*EFU&M)&NC2chbPmBD+N({xW*8|@@T<>3>lPfq0 zs?HK1e&wGzLm|s@n*Ru7Z=q~0%a~C6a!aG43g{d`x2X;bhxBj+ak`g2^3A(9%Uu@JkE*{X>^7 zr8&7;|Ct);)9>yA;#)#FlM<-zzkm9~1cFrI5PG*BVqO@=H!qP-AYLQjL9SwDXwlGP zgDt!cw~-+-Zf5?c@eUos=3S3R^#Nngp#H!138NC`<(3YUe%OaV)+U4=NxSN)J~C`G zi0ILyj2?b|a;$C~gP|N2TnEma3B^M*b(r^$fCrz(T3c)B>-)3w#>c3tv~)1Dx-XRJ z;ibgyw3SK$s+n%DxfyJuEO2R@5=aiCvzm&sJTi#hHNE|RN@^r_@7k4iLj7Md41tL0 zzi&zq-;`WM#%peZX23$zfa>ch6Xf37SakoWbE1~68Z3_iLqlWZSDU;Ph5skyw7`9^ zzI@<$McI*`jzxd##!L90WGltmNFLG3`)_l}?qk}2)DuFfIk>Du$KNLMYaF-q`5%B# z2L=?T5~(IqF}WyJ99}4@tIu(B+f-bSmxh<^^JwdgsnFtxEm0tew;9g$bB#9Q2+f|% zRr-cn{9%FyR-@sp9avE(BCJP@ya|-BrsYCdK-s|&Eh8E~jmA>r>(|nY$K9r{N*cMA>)!R1t@FrbeCgT2ty)F$c* z;pn(vQ^i*?l0FY5!Rg#wmJ@M`2qwStWhMJ;7EfL0sYjhD9|?qCYLdJ}OGWIe*m;F4 z5_Qp!0EM-~Oa7r7VFnUMQ~539>czLPPpaa%MkW;=34k(-dg{U3ZyHUgPy_+4oP9lK zMg*Q!!=w)H<1E*Yfx&2Xl@dC7U-*IyVo>g|>lmK)Aw=7HAV%3(Nbl zyO#s&F@kY*^w)y5lMk;(+eClNt)OP;tcC)ZZ^zUau0o(5ix9JEiz=n^#=nS?X+>X44iP|c;!ncmUT%4n_r6mjXmfqF)aDQQ-m2uQLT zxNR>mJ3S!f`SnihWIjD?|O{MDao ztxrs1fg80sZ7;8{(RQq;KIt=#?=LMQqrH8EioUC-zdxwJkiY(V(l;Gh3-60&q-jeJ zxpkBjU!gbn$MkzQuU{9Zd;kw56{}jYfU2#n<pDTW(PBeYAzlh(FGNEFAW3=OB2C57+S0?TtB@A#!dI8(%f&@G&C!&mf3tBC zd6n1%|| z(fK@2OnVz=-TK52$~<%krUI!Ig&2G<5)6`sxBtAwi)q1DaTY0-BbF~0cMHv#bE)&i z*CWXRXtwaNwFr3?Pahliitw^)_wL`clQd{pi_-5bUQ7XfZBIe{!-Z!M1EcaF(+dLq z7?kTm=ETcS@|wGL?_L=C)(K)17naGOOrxTb5rVN11Cx3t-n<}XR8MPwAHcV4(3!pFGpaX(u}=xyUz1#$wQby)VRILsJveLU?+7 zv(vOuGSj}6P`f3bJBNUb!z^k~SX$QCrHs`4^)6kKOVNA|<5XY*G}O%0#`;tn1H&)tMo8B6+^tY+9n z93(|3Pf=?4G@kMW(4@zPGZ^?Ym0Ho0K6WL?hGq}@4-;tb9m|amu30c)!bT`gBAmz( zo86~uptZt-NF=Dr$}GVlTXeb!xqNC|Q&?j+B$#_frM;c$agly@5z7@^XzWxk%JrFNsMT?!qN0V(kbsTDhl`<-;+JqT6Ja9t zIYJW85)=`jD_Gs2>k5#Bxw8D`99g%r}O0g?>*AJl1xSk%_HCgE#NS#)=9T`IDeoykH!izwh`C`nF0kQvEi%${C#O@iZ#=Z{0p5Xug>-Y= zqjC23-XE3voM`#_wdq)e7^NVSA8tNSP|(k2JJk@feZVhz#oTa<2vi>Q8IWsN4bEZn zTMl^+LhTv4zAD!X_a7V5fm`P%cYPKt_-nubOw%x!i1&^-0m=eZ1T!see(66^{E8~Q3EsB0 zYcn#$`5*Jlhol@naWrOM5SGl@wu%i|*smz5sH`6%6C-cBPe;21e!t+%na2oRblxRh z^~8(BlYy=|Zm)A@=V60xX=vrNhaW$`{`JJy)4^4$$9W^& zlCm;j5YohzBv#?_B&ZWP?pG`FBU_j7lo*-u-h|VSJ%e1>L{xUU2l5AmMnzOvtr+(- z?V}~Bl^b7K_>8O%T?avcq$*KsppMRI4qlH9YyWglWYMEQcJV6i6j+l#iALhqfSZcj z0EF}NQCeH}$P;4ZdyDT%Q>shlqu*4)RQ({ib8I?OxQViR_Kxx~=lF0*6@)8$=Gwzx zh?YyQhD{}~IytG0<*RQE!$~A*`L7nXJYChu9GjG!d@?`39T79f;=)!GG{h-U{G&EO zUq@#+tMzE$H5Jc&U>OO^0ZyKfd(o{6`Z<31$ti5>&V&|9PwzJ~fD}z0VuwcKglNZs zuoaUqkY~iyZ95}a)LM`>oYM6}HvIk^cORx#XpEONWmHfQz&GUkyh@`Hy1t1{+eWbB zdCiohNTGkPjI;&MKq=uw+>gC_C57(HPQk}vs)vUGC|mmZ=-o?S`Gj;*DhleS_(YtP zy0p;dI^A-F5^9UThYo#X=Oj=Gg3HwIyaQ&#P;JEc`91LdpzpZZef{sBzgpJY7+4qAVu|#_v`h_2b-iV;(?kg$zB+2+b zSute7|E&dZ*z!0Bnp?hhxl&jYhYIj2@@e^D2I@LQye_(_Bn}bhw0MZ91BW{?v-1-r<6+Z7*aH<*K>_yJ%U& zDhQ{j)Z_U?9jc7rZhF=REneS`TVwAWFKYzyL-}9XCzadGRfg0mU;|%%J|0VieL}03 z<3BKbIE{-7_$nKE3c~94pMMFSfb^B~pw4>9bsE`a{#16aND5cNZgAw$;4}F834q@} zfJ;n%2?zU0!eN$KesR6}6G<&N5ok+b+c^hZ1%a&;I5fJ>aG@e2Fp4L zqy1w@gW@1gE1!B4RVdof&Een3_EF-3Jb^F4E;qK5>0H_3J?^v zOjIlXOOYAmSJql%30qld#%|A%oZT-zgW?}eO~n== z@^vtL7wvsw?OM~n%EE@|NGlI8ke$jhWZm;ry8{o9-!V~l+uP3uxuJ0J<0+kpXW_&QJbwzX26|t86r_bi zY(KCoy?RpaIZhI+uyBPj88BOnm6kt!3!k$Ljznz>)_*?!{-eO8$Q#rg%$cffF$IF8 zh<(3)4vigBOUiI@g?en}yWcmSJQ=|&qz zBPoEvJ!M2IHMNo=kib7omr0MF06=r+5|>7Yy>}g@K?i=BiTvl!b1=Pn_im$7ziZ>b z1Yu~|f5i795(1q-* z)RvEjJjwP2aorR*(7z6B7|0VvBd9tvv;VGKxz1(#=2>z6(xu=0YXk~or#rxdKm=o_ zV3ea0uxy#h!-`RQdJAPc#qVP<&krmJv{Y1oTb%qtZ-medXo{s1c-j$3VBF!O64XqN 
zUHsLISL(mNJkvMRJcopa&N;JCso2!m9}1CvlMQ{K5-5B|Eh%>GBzE&TeZ0F9Msm^n@Peja5%|vVj zEs%8kr~nN3)@`2--6;b;0*1%0UysvNWbY%Y+mTm>_k%FD)kY=j#2Ta>7kV)vZ3cyg z6Bkz&W~lR(Ywh?h$bCMvJ3XDn0>)Q5nPa~2a>B7AM}mg@N%+F+pxc8V?<_5yx@%YK z@pZ9j@P9bQ!th);f8Hy6(z~T3ht&<^&UR}6j_7r>;7Cbx?KPmW zA>#1nvQ4CX5K(40!b&S7K<~N2bdPzADLXlZaD2si%-Ish+^_l=(Ya&c&hXTKl)H(s zvCO>;Di_TKd&Xwax)Sp6K$u@*_v%|uah8M4Jp`4!4b4A?zY$Yw<7%y+X>R0oKGY9`ppZw-fu6GBsfV!W$O$svG zotI~_)7s|11DPTRmbrf|z{+;jzm$8nBrJ@hMf%+rFDBDfrF4mrRsCdY{FHI}PnS)v zTq*8Q>~vv?qQh#TX8|Y<^Ld}T0h<5&+d5G$LGk2e&q5k{E2hJST1qqM1$fLqHF_V< z>MSr0X4E2E^a}%Gl zCHmB(j~SFE#L4{pLO08ur!e6@GUE*ywMlsh?xL@!y#Y&Q;*+zEl$6b1U{KU(U@?sS zYw6#)T8x0;SaxjSd;O z0$7&p&M;N29>#s>q*GY1&0cxzDGnTn{Z_IaFl>UTx8V3e8aBUwpOm9VnxkBombB)h z!V3*@j*nknrk)5-HX-V;sa4H9L^tjzNCVDr`S~tv*8(Y6^SE!C4sGmsM@It7fNi(< zwR@5xqo%-Zx!Mfa#?shevwO`JbCi9X8&a^Jw4f7tO6z zqBh8_TLj@I{uLFM#R;*xpoTHugLCtZq$I_Io&6VZ%oR^Ue=yp>ve5j2qP?)UQ5qFw z;|QWeWhUx`jBDhth6ZQ(Y5Z)kq#NIdZlo={n=d+Bg%90$cO`S6aGUPu^_gg3^LFhq z>`;5iN`D)3GqVnI)p6ib(7BQkJWb2d-bE?&p3npEL7e9Qv_c6ll)D9pDPW(4HZr!7 zsN!qa_A`l9pU_1B6z(2(4BQVR@nc|41o8uJC>?Tk38Jy-uMVGYUBL_*r}@|9ce6Q_ znJE)>qWOKqy7lW%D<}o~@WOnI&Y$)TE#ffmW*flQIPe3Q5wo zd0p1`{WJqa7r@h+rQ|XRF`v~wzLOj4pi6r|nWG|T4j7OZ@p&HZAv7N#2|Z<{ZC|4d z;Tfx^Z)a5i7dN9nChn`)_sby_tU#t=$$`rgvo3^x+`|rM7Ce5LMgXK1;)Dy))KauD zuYMu!I#r<*92WNBtBr;#jRt1{t+RxH8jbhNpBnU_x+Wfelsy02jb2)V1}&zrEIIYh zKaVLrI5PYTus;!bWf*P7j7|bYK_&JE$PH)Sq`{K!Cne5rJ$i)Ti?U>FrYdy`g{5dS zBMAR4P;a8IU8;haOSu>$lphvX~d6ql__!49vNzPmUBCb7q z6*jJd53Z`71QYtzYouN@aWN>l=32C z`um5Hps1=^IpjoF$+vTVGKAjp zs7T6uoVvo3pIM_LJ_(jxIQjm5RR$Y*H{qA{poDI$n1=E0?n&d9FE_VPUQ<-p{`>2c z{HhUbPhnEM;`3Gtg{7lXFc!@)0{W!5@=wVQukN5MEHe0QmwWi_=QzNewz&Oy%Ho0dr>_Fbs-4Z8n` zpG`SY6#SLm;F~oF{p|ca|D_hRYN{2dZYn|!1pPCMjosRs1_td-Pln0|*r-^fC$u+X zGL=Kr79}N~rA$sZiTC<;M8j;p*y23~t`_kd-{X=}L<}LBsB2FzBADL^jrJ_wOr~ z+h{5pwrD#fK=aWK8i&jw*sK5bW!A2A9ze#6>3Y44Q^T3^QjMHNg{+MuP)W=4JMZEl zY$R%`nV5C1ym#-!X)EL{@4jWRY0$0a{-K;41pg&@Z1qeTRUT+=+Yxo%!t-NnHE$03jNW70>sbwmf5IfPLYw^bVk(Zj-y4 zjP`clqUq#T!snz~@L7jS>Be5A+7A1QWrfB;H+xQ+{h_hZ({fst$_?hVhV=*?X0Va$ zf-9~5iCL#JkNfo$G+98dW?TOAHf}+^FWj+kF1ks}x_kHT3$v+L4WUyLLVMeG5vER? 
zX1(K+D|{nW;LPoJQ14yztGmDMdyKf7=5%&8AI6{d4l{S^J_%7eq7QsAEw-zjWHN&K9q$amviIZ%FqwIf&HPcE?4F zz0@EjYxrqRCvLXYejHS}|E2`;YJBTr*+cf= z+?ZLzlRBr&g@595w0mrTAhAxt#_zmuukP;L7@SQN zXV#OZvg*`iMF*8K_VLO!9n+ZGi>@!~xU|HO-ezpHuY9bAh+aoKsUtf0E1qID$^bj( zMyI&WEsp$|s~)mXV;n38+PS7dp_{Mb!}omyz3`$%);s7(M~8K53sY!sGHVgUV^Ltv z(&O~`f&dT0LTk;^g=Jl0&_2m78x1Cb!)>>UJgmmb%X^pkFDTQO(pOoBa^|L$j%1VF z)wZRlJv8TZeG+y^I-{T<-QE&~ghWETRYa#K{m)-AZwxB>E;Gb4%cGzTcF95FED^hJ zdixI?2Q5OLo4;C9-o?wrIk_&Z)@X4Z!H@QTJrh`PdI!dFv>osR+ zy^)=JoVr_!m?Ps)blWa8np!N0(=;;b5o}A#jFMpE%JWp6F{_VRMz#I3Z1-DUL3+_p zibnq1?UwiU^%Gh58@X~?uFO%Uj83n5<<82my+(} zz*zazVJkDTvzu3U{OW%SJhBeX=wugs!hinwA)2qJuQs`U!?v-)E`uvS?G^=pWaipQ z4zph_8~St@pCbFy;&(PZjNt1I`l@OyH!*3h@$4;g^oCIXZQEN<@Jlrr$QUG(iKU?$*6_aH9S!$Mr8^R@ImTf{k zK;3x3#Q6$!Lc}Lc?!w?(h(Y$N+(4T%t3!&Kpz+|8dGoY;xL;33R6_r(SxTXT54ZF# zLCsHIFkAMnm!Gt;T#OaO)5l4QT{hT?Rd;wxP$dVEwZ zdphf|GcUcef&y;8AF_{-rAtTp(Hq+jAI()W{~(I!RS!EDGFN5Qe;R~k<7e`ESe|H+ zKWj=L5Y5c3RB_zZJNND__@!a_6>z&bB`?dw;z4{VKv>wsTZ#i9d6wO=p?5*i3dD~^ zKD{7BAjQsiY7mMlhMsGdf2C)tnA`kfbw{(lC~ zFl=Edivk1fcgYI^AA04nyN5F~8R?Z$UNQA$dfvnb97J=Cp@a4vKVDj)pslGXxL*W4 zLLY7@OuW=OXyOBMNwc5_*UZyR(_h|Y3zM}U{* zeo?tkzvVn(Dl*r^enJr5;t7#n-I_R$Rp*y3K;_PV;S6{FvisWz;CRW7`4B$6&I2EPi+p@c!%RCXrd+*7x*h+bEK`WUlR56^g0wD7K#O)U zvcJ3t!^I@Xj=W^14d$5D+AlxnMV8%8s+g+$`2}&5TvdU^S10c^(`Hi+mE6+V)(cSU znq3~0kvlqgR+@>QOMms|+Gvv;#hBx}%JIUbIlV3Ja54ROQsx#UElfWA$C}4Fq50 z&V1Cb#n|J*MVg@89B?z+nOoG&PyNrgFihJr=6mLFh8pt+b+Uu{PlV8crC9Net^yPA2qS_8L>FtXqT(s zJo+p4#)79KGzXo&tK?;@){G#K!${K!lELVdm-|HRWP0hDY%V=`@J!%~(PCef2j3wc z8o&f3h|EE5aF;mRT^xtnyczv$`NN0RSoP2RMdxyn*iDv!oJAT%9d*IUb^7$4*l6t7 z(b`dc@mK1grjEpVmCg#Sid~d!0HwWHONZ~6@y>gv;Lt=8`$f02Xw{fM>D`0& zBo>Z-W3hM4b18*O=W_17`GfYPyy^R(?YE1I{~|t6iG$4TR(8b+p6=|#ks~;CUXhB+JySjEhBcqa z++R=-1;F`5Z3?p;hu&TkEdgn*Mvf8n`s6p?RZXIphhkywg;4Iq{D=tGLr&SQC$)Ji zfB(IRQ1W`j4iq#)884>Pc>9(wg3KCE@pR`-Xi!j+?)Vc=R(~xUVUccNX7*$C#S!m# ztw0iwn!aP|ys|m($v@i%ZzOrqoPeG{>yng60Xr`rf6~QlU2|#DPTOTeWwjrr^|Xz?u^=SmA%2ug);84FGd5~knqO{rps?^oO^w_0 zw$BYa=CaS#JLT5UOP{VmCg_gGg(G9xQ4^i`r*k)TsB{ySViWXtvl(W=Sz7BHwu&nu z|L!|=!Ah5f6U@`^PSBVwk>b_6&`-GY7DdSIO0~AsTaJNUN=t1&Wa1Ne>Xehli~3oy zf}@hH-~anZF?>B?>%m8FfoeDbpb&jkRU2Quf()&wsX6{&;Xg~)=#EE1 zN`_RKY4m#3j|f4KJ2c+b+BvsU(N^$VV64_qczvrb-PCAwbn}`{<}&e8;4w6Cce1Vs zGB6ZE%An+xI;1dK@=w+~!rvZg4Nc^M&@#?0J zEPG+q^@{paC5bF|I8~ZcX2|$TvGje=5KrC3FLvCNS5+M?U))V5_NpiQsW9W?XSyVG zlMH&nlkz+4bQF-scIL35tn$DynqM0Yq;s`L?UWFNYrGL5+E7{vnS(rhy&!9&_`mbl zMH%5&Ru046A4u}nt?}6W&^AA45TXYJ*-NJm^;q?@0UvJ+y5Pefz@{4O(!;|J7gLnP;rVj6v(&ksm=?nTl}EcHuZP zLBfO*f%x$8;}b(;Hi1Qoi+JYE+kf%m#o5^{U%ucE|Er}Yu=wUp+Z}P|`^d-f#8G}T zQpET*VT+!$9m6g_;!tXSblRg%-;!1M-?F~jXVy{j`bHn)E)z1By ziU_1y+CON{KYUC(cJ2B_^E`NFsKqmJpk&XepXD>>Vn>oS=e*OLWvQ{_4&d|?=;t@Q z;7Ix6yx9R11bm)+Nq;O0k63tW)FsHip6cR)P0ym8V_!vT0c76Z&KMWkzdt|f6Sd{$ z>{ej6TT+te&asa0j_3<{fj9x!13Y`J0O_wTbIos6~c>i+2E8#ULMbCPez z^7!wK{*^^!p^EkNv^w#m1Jwb1s*~^K?-g$24osVxwh34K(tG!Ie0?p(&@TpCVeq!B`*giHfnhEN#sJfi5v4ZE$n>661*XrHlN z@~fn^^irxRMoi7wBf@D08%2*#L!~?3>8LY77G{-pc-sQJZKz4Q3<*n4-M!mbZ3|v3 zaLy>&334R;?%lfK=KXk~F=zVA8K^#3@p1k~Q+>gB(6|!gzm&p9aDON~v07x6dyd=L z51WQwUbtilc5^#)Hw;ga*uc{Wcz#J##v`C~IqLgk5sp$6Z%!W2;J)zW^Y`!co!?kSucoTH@WIRI z<@|uiJg(*ZQHF+^xm)du&|YFg&Gw1Z&(DZ+&+dL$osu@w#btJ`!JRT=H3~Gh%TW=3 z<;^ajXS`U@)wTB;*$>+}utpmF`t^3bwZHD|Th`F8JR9DBMPmlDD9fMnEt$S{J!WvH z85gOf&d376DOKF$b2SxhgHHIa^oi}5nURrkJEA&&u`Er@q4LEIpT#~zQZhAmw6@_) zt|yg&2yH|~k-&(R6WxYPqy68`o{8Km6FlG10RUU26vIW#+a#eZw?_6gd<}ahez1gN zaHNE03RJ0)sj9r>;F!M)JWg!hUN%YawO72V&MhPwIri0{{8$$k2NV4F#DTyc*Ly9< zcKnr>8kQ34nwj|b+oMvc;-_)x%k9AvC<;?c#X;!i8hyi+^_Ug*k5w ziC!1;M 
za-%{@UshV_&bY&m{`Q7~=<_Dm^~>dpzz@eyT(;oa)nDZC-QHjF*WQ&6*DUi|!VSIs z18K$SJI3x+a<0SI$olH+y*mH_!o&2QjIN^y9O9as$8xDFy8ta8aW&HxVTvJ}iKwSL#tp0NAuS`GP0pkIlIG!_W9`g8uyZvwxHOC^$(3 z%CO7sGiEfrd82~Y;+r?vF`_{Jr?ZzRnTcxc7ZC6UF}?zfQE7VFnb`r7pz=cnT( zv1Od-^M>I4?b^;75w%ntaDw{>UDwx$`t|^)^@j6r_FN3Ek6U?&tAHO}^%=;ZY~0ab zmP>q9=$P!KfXzAQ;NI=qPSrd9#_D1!8ck>CgWn$SKVMNWbwx01eR%Pr7c$zb@Ksd| z`JhvC&9vBy&Kz(YE8Q1WbIb4EHDDMBO95)P1%1^t$8Rlr=JMJt8(kOUC@X)|t*5<& zadijeKRacL>dp~RaFrb`P}O9U|HQB zN;;4Rq#q>4lH$4cc6J*K7E9DWkiXMFSV>DWuXiI4nWt)WETGn--gsnS;Ff2772P5gTIZmTZBx?7%xxB*Zt2%ZzNT`&wqrcRB-w9;RD zBF2W=byW?w&HT!9XlY5&5V5S)F#-}euESLV9j*N_a)%sJ0F_qk7bMpuQq(g*jQ-UmU@!P~ba z?}0zgLJ-Z=Z+pSNVbJZ&0Dt^y`PXgb-@xVgo1P2!`S+){)GQ&0=`;MlUFOYJVi0r| zx-I#qvTf*8KaYv>lasH$2|oVS)lKDJod2#PoN|n}Qa5%_4%4f-nVojnpDxFcc8C4L zZ_Dhzber3}ET(^{*<_Tmf3Jbi9PxQ9#oU0fva(_i9`y6t_%zzO`{$1kgj>2K@eBH2 z_&~p-f4ud}Mf8tne}9Ai5qI&wU;a;F(0@h({pT3ae_#MX|AE1OU;u*ne^K&3F!&D) z08{*r8TSe^k+hvckb@pyQ7qaF_|Wa{Lz$4bY~&> zA0B9L?D`!iWoBWqNm(Br*0Z*@KB4jRm)E=+o0<$QEG!I4iVJ@$BON)rD=n^lB2~4e zv9VFak^J-%|ITt{voo_c^t;GnAu@+{{w4em9RdI4zyE>u|GS_)^s2Y3i=5K5v9Zyr zwCktoKfwKu-6wvefzp_04;>mCA7_gg%(on4O+nrNNb|QhP+(9H%^{}xud(t0;i&&s z7Tm2tU3N+<`SsS%XF-s}(X*)gA?P;l*RuNhGtA6QzO3>FCMG5`{e^;sg@p?%OOq5b zk0DSdyn=P+AK`8C)%wpPxqB#H(xic&yUoViiwMh zZ+_!LJ&SAy-bApkqobq9mZ(EZOFK#=R9TW{`t$Sh0_7vb8bSmuMkhN{7dF;c?I`mj zj^w$==KTdrH^#*67ss&=X`wRaslF$H?ITWNbF)EyjFJ+P3Ie;j^-jrNO*d$fJmzcc z(h?^`$A3Di<7z|r>0aZIF8g?%c$u;^ql>P=fs!U30zJyUwP#(l08?7BoHMuYsH&z_d+*A+1{cX+ywHAWV!StZws*y8dl?8Pm}$@uEWV~>MLND~ zKy3F#XrS5c_+_20w8t%7T_<1A9P-ky^Dj=GO;yVrJvgXc>g*Ky^z-jWzDfLq$up<- z-hyai&Rgw*_Vt$iy_=Br^{)GCW@jOPZf&N?j%0T+rec^&(ISr=tR>jf~7b zb#~X>D=oVbKk`y4mhs4u7kdx7MSET6z+Bi_L+JU#hx5dtf3jOzedIOwK>J?pIx4}i z?5@3bJ~%j7i?n`_L7=?c^_cGF#`=*XJXRBJ9SLy3cI&$BxcZi!z&&=wC2_Rgd!=%; z%?gS*IcvD zeli&|nVefPW5{wh^yJqVIrj-;ti+BU6oSJamRu_#W=aK3QTT;*D3$$SbF%Ef|Cx2w z2k{YVSR*_J*Jo_fbt>GtitH?>`ttd%TTRfduPnLy-o4IlY+?exzpUm@#nT3jFxXU# z*fhDv_&c(c&Lq9X;3K~r%l-xT`7!R-=2v0QYS}Th?6DtEcu-^7-G5lgpsgZhB}Ue{ zn|(K-=$O~twtdh(8MO)2PDAW%f&x|gnPU|R$qsy!?-%T=0>nkGC|-uxM8G@mToNI^ zO=E;Q61PWzYOel#i&J5x+?bbZMutX|0iqVQ7bdd4#Ck(*j z%9F>uJ-;~>Tr}PTy=*{Zq|veaX59=CL89Etg1LJ0CXHv9m`_?u!t0T#*7M2L)z)3B z@G#Vw3||i_S}xbsnRnO=FIe#jy&&5Y&jwDOrEojqe2i$&n$r!3%E8cUBCu|lsB}|P zJGOvnc;SwZA5rI}G+(%0l^nsuaC{_1d3RX|G3(c#BH!uA8Z0?V63e{qyELxM_?5L& za?IM#_2DiF3=g_;f2TF|MCpmmxl|cmH7hAK*~DbmlGj$Z!2Y>;?>hQ(J%wlC+L;$p z!ZXj#Q%s$|=IiRnmYP4gvrFPFFV%9jD~zjne2^s&+>&Lp_4^INg=jBXc)0+o<#?B` zUafd{$|@bBdV@vW&~cDF#s1;|&|@iI`5MRJ{yy zMh-$s&9Nx;*9oxETs?_48mZ};0D+2ZaqZ9=jyg+B)k{J&*4qo#;P*e!BD;08c3M{) zCy6jLa8hJ9lI1E{y+ zA%s>nt90wt5?SZb@e727SlZk0eZEvGRE81mWrPU*{`(e9!@AxxCpH}$iHi6AW!et> zANQKuPkppC`nDq5K}uVO0mn#HEj;cx^o|Qef}JbUs88HNu5ZPnj!$iD!Y$8a{$CiF zU_&&cmc@3|FPZ2^13YCuL&I_Q9}Lg1OCzEj0O&jGLHh=38yxT%-BvAKp-scsRpJq% z;SGBh-~nbYsDx>@o-*@b=NqfcDfSFWh8)AeFo#?JWbe({qAKlCvWV6mm!er(3M{fKScqrI|Cij`pVS0!EJV@5bd+P`Tf& z!Ody6Kt3BQmPsJ(Ot8qg>nf_So0?JY%v0xVayGdN>%`-mop{Cldp3VdtokZ!I%ZT} zu&*{MTNpoaZU6a*rcuENVV0YuY5y2HDC;qm5Bl&sjpNTw?(pB(%;?(!0I}&EdQ~e83 z@s}cMQpFvVLxslYkoBk>*~m(rH090CH(J>UhZG9?3%nBlI=}{KgN|y8Rrs&+gNQPz z)80`Ww6=Jpm>|Q<6by`d&YsFJNFtnU)H})%5khHou^1X0KPBxGy1_oA398X83Gy;++W4`%Wf=rpP`SX z6+aIgU?`c&wrGPimpm~EFWeWWuK_r<;{uo>G>_QGg&!6bR)vk{j4YLUgn4ZM&zN^=GDOgmw1b`y%~5)_G6>q5 z1qeZ_YA&a^KwsX`;J)URg?Y_zr?aVTDDMYDKsw6aQ@w5xr|>d=<-1dsy1fIHvSO4E z8$933H$bR{UVBh2=+7^ZTi)g{T2U9WtT#_OG<^zdIg0q>jT*F4ZDr;ujv5O?cw1wN zzpPl2%~;;j8b(nIi8Dg)s8o=vw}LH#X$Qv-+9B_9xBWQ0QLlrk9v;J_x%Ni36=M3d?;QG!TYFL60*aVf*kRAB|J+`Fz6-AlpXx= zedukMs0=l{U%@D!Qmv=iqf)Nv%PL9oQh{|7Pi33fxy5ZfJ+7ah=hR9**_NpUFL#G( 
zIZ4T)q9-;Q&ZHtE;~K;H$tI>#tOqCF0%LzerR&4J%P2VQjyULH$$BOX+@|mwdha4V zO^l4a>Sn^}Q_Z_WYQ(Ruxsd0-e}Q$g?VkOIEQ7{JxavWCF#E&jiN~)>B zz!Ir|mIY*7-10?dNaBSozDqN$Trrwg&s|qmfkP|ZQ8bf^yeyqiO~Ct%Gio6D@y6~J;~uX1xm@8G+*hC&9NT95qn#P{sJ z+LXUPVV1i}U`wp1M0@@n1t(ogjq8bAAT|Zmh+ki+Ca=ZcfGxu9E!BE~j$+K(MHY|i zwX;Qiy}x&Inv0B)*)Ah2o$dm9quF=_#dEGW6Z;Bet(9|^iRkoc-&NTr%b zOC80S?x}*V!8V_oQcC&};>P=8Sd=?|qeY?1hr?@4{Z@D_9WU37TK<5b=cO-b)wn(Q z-{^{I2!c~(=nj>t>Cz{ewYDOw*K|93iOr|HI5r#2laSEUyL3NyPT_;W=0-dDlhS2A zH2?}c#nyj4f1(-$BB8=FDkuGI-MTv_Z+Xm}4>1LvCyr;FY+u9ndQT56uap6$QWK<< zhSB+eY?bn3lx4>UMFE|NRHm~3Bi#jtSlJ*&`of72Udr)V<}(o|I|bqU-cjB6)(8d% z!zOnXb&|onx5oq{)~e?X@?0uCOP!5qs_JBVK-ZMi7hEGvnm#QU&bQ6AJ~mRRu*^)p zWD6S@yR$WA3*)xAR@NS>EgoW0%MvrU)qBD+si49HVnvi{fHiEbW>mODmp0Zp<&uca zI_o>~>IjTo;sNvRWIcrGBjcy7DD}npyf(nmOxS zsw3mL9t((sTZTrT^{2d(U0b0-ooZM1JQ^7l#ZUnCPQM1Ix3Gy`=+!)FAPQ9m@uttK z*po*e)J6PRNBWy3Q8K+|QL zXj+MrL=y++zPFlq|Aoq%c1WPWZBjv(Yu{2^Z5}B|B`C>oYyy^;G4V3&p4C*S@GFyc z#<>)S@s|i~7%YurS$S>}fjn6sZT`uoH7jA5_Tq&La0cc)K|c94RXrHYW~Nek@xzu% z%_YR!GfS`eV!vgb!;FrL!KPINS4ej*qf3BS5`)whbU?W-Xf27y3Ga630E|XMo>Xpf zxq57Y*ePzubs}jXyrzoT@`0lXe!O}FNi~XisMg$rAmI+J)NLjI$jQq>(CHuU#?+@h zCInFF($4@~3eLk?qLr-R&9k5)L1+G2(CR+{!!XNbkZO~hdYM#0z4HO7+ zg=o^RhCJk@2#O1w9}vt^MHZyWchCG%IKe>h39?GJBW=h!x42eGPTyYt3fPw|4cZ=t z&YBpTyI!6RW0nz|svTx!?!Az5&C(WLaF&X4p3&MRWyw)TrHp9*L$Xy-(18{ z_MIPs4MJ4a%pt}CP;nw`7@QfpRpXUTbOAVwo)}!|XT?uf{ zqxA9cM9Fs|!W={kW6Q8bo`V^dJONiqG(6yb$a5Sfh-L4ogt2HDSeL^P>+{ne8-k); zZj82u{d^g+AScy=`$1(2vlO}~-L}6`c4Ox6e!^_i=~G*oC+-F)asS-wfqxjBv#&Ba zJ-)%h(7aQ%qd$MgQ>H_vY_EScc{R{ z#xwn_fD+(7qZ3Vuko$vOXCi|5xjW1U18SO1e`QS}mkq%x-1izShS_}}F?WCNAOC-t zNJr&YJxQ}7B_hrnfE^-r%Bl~h$#UUDTVH8E!b(z_?A_A!v?C8cgTr3RBri7Z{isnP!=L zE+gxq|Ua#&|)AsZ`w7053a}Y6z8xM zSjF9eWf4M~gF@?QO5bU1r$>)j64~l1%;-Jb{}vU7$%BFj8nxNf5_4xf>kOLQD@Q@H zP$wzpFMRLmK)w@>K4JJ*!cWQDMdEUMOlw+ zSg@PT;lw3xCYTmTV9<6LISl9hwmRQqWCey@?jlJXC?dmlU4bq<;C!AnVM|Vn+Zu0` zn_@cex-?F6!10Go2krFlJYoYZye!1>gnU)}0xH5l{#`o52)iDeIn{mt9uKju+HP-c zc9bhBel2xCp4)`NBwSRk)9gl`t%2rL5{LpygMw^gJSi6NeILUq))GdNYBT$RZ8~s#4T&x3a8y7K6Jg8*+NSJZ0XdWcdvI24d!hqp@ z{S2+JMBRsGa?j%u;C4)87S$%dbIyFO3)0N}K}iKsS4yxTCG5_2Lf#RLbGWF?#cWN? 
z$e}T58L^N-ay)@R-YTH5DWX+F=La%Eq;F$$Ly2V*WO?}d7AHi7npF3BzKR*mb6hTZ zkZA@AY-CViOE^S}I8+we{uSr|^>bYMdn|gU-7Lrv^olTFSf!=6vgOGCxz*fZe#CaD zJ3A;zzS9~rCPj$e$_Sy4RNL|osXD3?|4Q25AcIzAgvSZ=Cff7iF5dt2684z2^e_@B z%$(JF`5NV%W1gM_uPHb-OY~By`cqF{CW3RTr$hKCs&|=B{3sDnw+C?lCE3=)} z9ftFkl_bxmwQeNr^)R8s-oCpsGGZljzGL`&eqmn9?^FeNiC!vP`jk&0>JytkF9l7F z&oJVb_g6`)yu6$SLCH1W_wPgi+&^#*C) z^MNW1z7V+lr{e2y;juY~YF7>esHP8S4F1Xd?w)Gp1M5L(WVmH6e`5>e;F^fH!I|Q9a*Q{~n!RqtG0Lv5QK1;%Ul$xq#7Yk68gg0bO5Um)Wf^VRko_>jex4SHy zkx)41?e6xhQ*#$|&+`1we#6-y=FVg<;SDD#mdTFyyxYZPAVe86nh&3K-PB0GRI>4P zj89EUjmy07k>lcQ3lA*B2GwI9_9v<@m&5K316=9G)OI%;8j?q|!$_6sBpBd2D&XGJ zTB|QjdHz-l8`nv-X^Bq&&28#mFugLEC%n~rsy$T&BWoE0)q?Fv{bxF(A;|Q>Y$K*F zwhBplz#9&@pd(|l&ho6b=PCKrdOgwWDiv8h!Q;xBZ$U(cEYM|b+>M#sx*x5`1ddv~ z*KPh5QW{cDpSzDng{QYT!JEYlESO%+9pM74|SAd45(U?9I35HaV&a}98H^}=L+tEb# zfA0<1U`{O$%}?a(KBAdGSlwycL$<7FI~0mTvOTQrZ59(%wG&^tf)D>F3gT?2#Cc(W z&i3Gb+tPF}tVS9zNAlB~kS;==9>DeP!O)sg^0)VxmFf#{emf+Ac9W-Na0eDNEw?(b z%U#hr6RUfXeg1@IKxMmXK(^<%N?)0om}59ktU7aIlkBSaBl916gl1JkILBGRI)WiQ zs}Nm~?G5KD)_x;!s`)exy@zvNqHAj_aA#?SdwE8rdSd=AJ(;w)_j%^;2i^8x{f)zA z|8rfukCRW3k!-!J&MzF(9=MwFzzQLjo6&qMJcSODe+%#8HNI-h>nq6@?JDzZHaWeU z=PXCu)L;J^OfQA5R@L)UJPTWk`o|o_R5aU{7UM0$#Rs+9R~Kva_yMkt^}ULl?uEGz zw0sHe#slw0CPN7BHe8J`mtu&Kb%~~(fOBtXm8#xs(KX7*$zx`&l$ABcNBJ1*`?4*EzjH6XB+Pr9ZId=7AKiAj3|8tBLV|SyZ$CoMXaujs(aE zR`MT8s?}S!S1Avzd?gea1LG=GKqX4*lkWx7m=;Y?Z3DRaiv5_*VzV>6XKCV8dzK); zpIg3YC)d&%-ZUJ5HPR?rt$vT(S4wrC+j5Ror3XbW-A>wsCBpIZoDk_l!PSot)Rl@h zd|NeG;>E9*(w}QuvS3%^{v^sXZ+EAlDS0aGBuz3o@;c`~2V)OBdDqRw62&;`G;e^B ztK)Z_cWfUjDlg^iKHMobr^hncS$KZ*>Rq?p#z+EnmTFUY{ge8cvS{$;55UCdXht zb3j1kw0wdLpv$;nw7@QoC5P$zvxWSVEgFYe8O4NB*5+D%_|aK@w9(*uXd*ncIeXvL zOzc&yI@A<=(r+Xs^cUzIZ|t8t(&CPiQe_0nmkx;q)vPR~xO98^>O8ViK}?Vyt-MR7 z#GYvkQ4bXzbpDaoIKDK*rOE|^QW){O1Pw1mm%P*xa&)`nYN5Zu0cd;PHJ z|G71P`EM1>;5_Ih=$XmEI|4ehTKgag|6}KO1eW-nLcuR#)-A$;K|83HPgTA;`9RYT zd2kSFLnK@uEcWH(kz-}}wgkTKbVlsMmcTYt0zdk)T+E}p@sH6Y*x<|Z%=EDy@o<|3 z7mlvLL{?8fnRNNS5Uf+z*dgoO*ehLrjkV(=`yNQb8_l&8%I7J6ITBfA@Wg|F8nH*4 z5KLl`sSR8>8lodz)`!IJSFC$m)apfpo?s07vFMCG6;Sxm3Q_tw+$ZsN1s0cJ5%E)8 z1Tv2pMOybDH{}tEcbSq-c_UO7S?tJFsRj|Jne`Z;Mxar3%cKz-tCAe)2iTa_{ zcclAk$C0n20u>#a$w5WL@*@}^GThCQkVkim6fVNzeq(hf9AdE1o44_Z_pB4Y$_3kt z{dKgxoF}et+Mah_Cy&Ys7~hiX6SC7$epe{44=M{GT>Kfi6n>r2%;GX%nf@ANDiKi| z_*IB~ud*sUMI*t34l?84PKXq8a^*XyA*o-3p`6FtY<}1+6|SJ&%|30ZmsCkv2#HF* z0LsC@*D$T)JuvdJv7oI$v~8xtD^MVpJ!i(aZg?zQnclF}37&I<=wP$g>bMl3aWr3J zrXu1slRcZ&rEPm0Y8qa6N&YNH8|x&p+@*_YQ5u!X3#wIWiiVOsdU@mGb+<}X$bZy?nCGeW1fwGFqCG(w(d@bLq zcexzVB40$!Ch>xF)$P}v$gZRV!a5`sL8&*?=}ggt=WA!}I}7i1?lk{4>i-YT7MmfO zefJB{?D_NfM&ViC&f&1_SpX01H9S3!QFA1DfK-B(zz{aI8Lx`;XJ0|Bt?|OE@@-Y$ z10E=0I#?nqM#^uJk}S_HU7wCiUQ(&wmcBX6aja9meD`Gm!;CMl+sZwaSR|$LCnG_i zPId!BySHYUH@-3JfYDRTd<|N)n$+`?CCfbz3PC>}qpU z`-y%I{`yEa!|}dTTYCrMUGc=Q5a?1;ax$kuec%x|IL^J~E(hb`N~$}7Cb;)zzFcrsDpVt^BX%;T4Y?fdM zl|tYCJytNBx&1DYu|4fQh^I5#fGN~cVLLrpvByYm^NwTpMcmQsXCt+FD-($q&f{M( zn7afAr%L4h)1%u~WlR zE-O2`B2F<{rzu<%Q^jagG|XQYz>Ou62G>Rf)1Q2ead8n^U;J8VnzmziA7_M1wN4#M zvK9071i{%nkoO!%Z5?8?aL?{t92jwTad}W|ZzUTpLgClOwyUTXlUIZJO@3lP?) 
zECF*nTPqfZ2b!lWPcuYeXKN7zuriq27662;GTo-4!vO<@SA138jp&SaFY)N4VNV_a zbI<@NE=@>_uPD;(d#Spfk9WJQ<+yHvUG1e`04IN*SL~?rw`8Zs@}c6`KeKF3@KQRP zPBL3C^9#ZLN#;hQ&F|C3(=JaeL+~&A&4Sj7lx|kC%AbU16rnYAOI6~07;N#&QpEC6 zFq$DMP3tNn&xnAv9-75poDb7Z^`KD|i>RqQFwM{@AAWd5KOYwHEIM~S`*$IIlDj&S zR#**fT#CmvPv0o*Elmu41p2a-g6F+~h-`HGh%+{&!d8si+F?7rKOFR1@(s7pwmx%G zmpn89KVWDC{UTG+lex2=3uA&(>V{&z*mVygF-P$QoOa+;(NQBUV#*v*fX-w}j_gJ* zYiGr_0*a`j#`@U^?s6#`F4yBobYbGTSYkiXg*+G@jo*GhXo{v&ERACZ>!`%S!2f~8 z9LyJR)M|oht;p495BKccHmKTq@SCx^u12mMW} z^)CD*hZes;&6z@CETA{-@0nbvM!`S^Qe4iT4&9+P;)Z0l?&BOB36=gn)`fenoNbRHVW&*LrPGZ1zO;uJ+Q#VHfmSw%^g@Glx-w5tSZSrnuuj3Xks;Y zqHm>oBF_*E_F7k$iSaB5Vq8qCc^H?We~YoUT~sS{cl{ zZ_ZcSZszSq3_FGDCHDeaMfkdz@kfVQFPJR4{_PEUv5*#?po0p$&>EjqW~0NKh-))# z=8wLVbzXbCn=nx-#EeT@95^Bq3*z@oq+Gb z>CVfrEMh816`^bc-N~F2Uz?51sr1`iv1Lh2U~^ZG6(~>Jg%osgSJ98&sOM9+sfyjy zfBbxOf6!VXv#@}dxG9pYzoFV1Y;Q{|6TI2MBv9*UDjJ(3R}){u(HVMzwhBo}IOr9d zDT);9!(D*9-GnF-GlC``&~JG7i0Y=Xlt_`W42$qYq%&rk)yo^W8SD*j&|au8?>ygS z$+K!>!rMI*oDkUGb(p`H1CDmM8FV$C?Fno%c~gdUd}{i3T;|te#b`w%5oM2yp(Mun zWG}xICB`#zyDT^U#j-gn?d|%i0GXziAfbCq?>=ZIGx9iTDHWv^a{9_+tE$PiUmnAI zB0)mK&bv^Myn-l?*^2UG)#r#KhrD323l*>eF1LL}4m*+tBV1_in>Om{zFPL2WR-XM z(e`K-MQiZV$bi%&e)M&Cqu%bs%evOOHx?(ilAGt+D7%)Zc|Zoa-0%)}Rbjrcw-ecr zesq6H;i!P=AOqhhr%R3(=<~p(lVgXNL*cmk9E)DBr58?a#`uxsX06Y6LZ`O(72+nM z=$TA~Ho!PFo{#z1I)3qzb*>%n$(zbbNA%K!_IkQ(xG*uJ4hoE(Ny)AuF$kXr^KV1h zLSQgv`;wZZmk}(tLQ-`SoT?w>i7Y_a`?wl<+oG~eH7y;#r(6n3DgsPBjGu>=c`KD0 zx1+i*(@t7OHiYy5sXmRwdV1P;oPw+W&?0&WS3>d0lUa{bAah;t>Ht%+|9)$+y@v#! z<`?MAkcw7dE5BsJ?RjZ3n2*wXlhzIUvowJVM>7ew)8dZf-^mpG!5~v|?Ny{th{Q2TE}?`)6dby#lw9($R0g(5?Q65j*)gGHdY|* zd*^u~ru^@%@PkKJAs3C*ZL%w~^5xQFllgqHBQK;Y9K5gp0iz7n=^+X4_KvsSrJKah z4@#XZST-N5zvo$JM2fyRJlgOVZ@~n6d0~VY(Ny%NhLx82^)5-fH-FY;_N?$_qUuNH1AG8g?)1|fem3G zL|YkA0=El^DndZPMKY5qcbHqRIc zn`&l*34!fhhkw{9l!_*=UEIOpcIQe~GC>6-q38+{HcxfI(;fh)(GS)%Utcc%88$1W ztEAUmkgy4*qQ^a&KQ3okmBVmkrK-*nVO_fuHlbAXWKFaFk1+B8pd1Uf^T-YS<=CyT zX?p>!X)eqD7xI-(b%OdCiZ_h$E8x6d(UxP*-K2U!JY3V?2_ztq(*d}8np!X>3IfS?7E^Qzy>(*Q`i8(%JLx#5Dd1r169S%5_Z1Rei`1y@``XiT zDq@AW#p=ado<3NcuCap8TR;}*yfsZ8WRZDO10(4}-Jm6YuYVV&{S*OILg1r}P|UlD)#0ggM4Kr z=nzqKNuJtTy)iT-cxVOcGDNFdFxgHXfNC$DusQ%$`a!JZT08j#43^Zbpdq>Mu8acvC|omGUX3^AzpF~sn4W@h@7jfL&!X|iY8KX32@g&V{^!N ztEwxSeeOrm4C>NGZ;NV{qm!k^^G>|%eJy`?m`D}$1S2oFj)rtwwWe89S5S4;WJWC%a7XWf6uQ;HzrrUQY}@VEhrH^5Y!(;3@cm$hE}v`&EY zS2+H|a3GH+wNew|U|_IA`o3XCPl*anO&PNH)xzCD z|F{YOFoODv=mSOydSDw|LHdAnP)eOE$wqMu?_jprtb|G*B#!*ZGVqkJwXc&s{(7CF4r+7KgCpYPRp;4*w^kuy% zm4EpK1ow-c{H$=fm03jCilvlT=(8lob9jTC3L_zhv_Gl(ML4FJ_%WM`m${s7e|?h& z=czyv&RTwiW?4q{07xFthu4~4@Xwlg!|9Tt38;Cr(s+50yoHe@BQg*bg(VKPBYYe7 zwi&>;uUkJ@TQ59>5G^(V)W$X6aGESRPOMJg3^H9ihin|z_Qs{7-H2&?X3wNfNAwnt z$y*A2HLj$B6u1;T&>Qz!4P?XQT2XLb2&Df8;hM}eho*et9Z^U^-$w%%C&!w!51WO7 z)B2u#UfbC}R8fy|H+vVPR_Cl_+6@;cMdP^|!GHo{S8#J@+dptjksv<|nAJKh)4^{z zuXM9>yVC`+p%1Y%?}qDitu>^> zt$LUx+(CD_pYw@prg0f;n#-{yf`Jlq(b#h<%E(!22=1Y5s}LH$c|t6gldBl6qk}1a z^~Ru+J6l(aesD+^dh3?>Dt8zt)xn~I?YeT#lqpD+9G+>0Ui*%4ZuFGf*?f>F1@skP z^-nQr9FFx!@k|ZBjzmk`CK@ScnxKK84cp6eZg0IEkL?*wUQ|iw&yY#6vC7_ceDg~?>E;z*|dg{XW5*?&{V|^ zS|Ob(h#OtrT+U3KRX^n|2F@WcT2|x8ja6Qr@WcVUVjbF~*T++0b}|Po0F^7_n#i{z zpt8$U_vwAg?8PUNxSI4-M)Wat*OqudcN*p;kk(Jo+Vn17^=eHhS)kdli)jwIrTFWw zbPOtccNe(1nfIJ-iDR?!_3h$Gm)4CkKl(K-<&~tG;O|l%qOP1bbu0ck(ErNq{0m%J zbMFB@5~*rK_L=VPpdjIqrR#=`--;q;ixy@yHa$>a3HPR`BnI6eyS0ChcR>Lki6<8I#eDgQ{S zcc;j7ew@ExpW|Lv`b^>`ES_~J3X_$i)g#pCu)(X&9FsW?nR)u$r z@^%u=sz;3EE;`tAxl1coO@p6cnP7cOl=e~;ckIj%*LpP@&HjdQ)Nd~nFlZLUFs4=woe6pfWrFE?&Vq`hdY>}QgagyX8t%652hj; zM3Mw_!FuYS#LcxkMQby`%F8d!%?JspOSN%F04-=oE3oNFt-CwHD_}rIKkT8P>+jJt 
zlF`3-j{{p+dSN5D15tC=in5HXG}dEP@2UBIF3K~TDl_Vg!fZViU9Fz_;KLt6n2OKT zanSdvt%~U5wx4akj!4l?CB^5_jA=~e2RvK9vVU#UHRA~K!_vEmp5xFxx3m{N7hPp$ zg6MO+r~lcW1xV#mkEngk)O?O>1;4TNJ6b6FNqfN}vSclC)k2M9vZchX&-USFbyJ-_ zY!s39XqD62;HrP#qX&Y-Dhe7#Fe@MMU3q2>9Ge3Hhg_nUXWzgy6&H*yctk`-@wzU| zgfh-1^}MTny);W_c;Nfnt^3f0rHG*b?-Hh_D;hzyK$9HKn0tfU1{sG=!{>OI0*kMj zI~gc)n1G$>6EJvmmA$?RMmLEDG-_daQuWLnX9w)!ll~B=FFhrC_Imb{VC2E3s0dpo zPI>gD1BMCHUag>7?A0Wj15+IW$WC^nJ!{jVciBzIp*HD3i8HWAhw%X({C-VDiEq%S zqFh`q-p1F`Xu0AmOzu&Jn9K;Ts~F!Q&z(YTiiD)B{bS79<53^w9MzB*p2y0HbfXS^ zU}D86ujs-KnnZqb*qBx!M$>4^>C5e3(8YKC-I8M$ji6W?`tbNwBbhsbRryMWQc}Pm zQ$;^&pJ|I%K@8XU5DM*1C&~5ZRDJg~s0?lxiC-&vzgmf1?B_$|&2~q@-NAc47cs%R?yp7cTN@|HLdw4LwwYoq2v9%!jgGUEzKObkn|07<1Ti*^G<+(}gVm=*gh*=!A%1Og_p+M> zc;WTX4p2R}D4*@=cbn5b|}cII2HEtZP`zVky8)Yd7?_A6-|%@rrz8X56IM5kp#}(xoR^wj3G#wWGM)`$>bR4exS!Wla0Uud%W& zdqox7)FC$AkK!m2#aY=!=Nm&c&M=#H`PY_GEc$iGWn5beI-ndm&IX{#y<*xY?6<1$49fTCsnlS?C$gwKVeF&S$`8f(VGxbXsq%q{CePqVb${uok=?=6iu6;j7>k8)^( zP)5#m>fJ)&4HSSxHnOM2d056ay7};{-=nZgg%KEDTh+m(WC>BU4#O-PZlqT1cfTK- z@uD!t6(%?!VM0840^Mhvcvon^p4JfbBH>upwOSi)5nGVc^n0F!|H^ z{(EZC{}km})t8MoC*0mz5Pgth;@{(wxs__Z?$Jq6yndTgU?@@ATXPH=!TQwrq@C9xE$^I8_ug(3vhiA1oJAQ4>skdT8``TH=;}4=s z$=N+0=ucE>^2yv&3&@cpj2@wp{NrP%MbQ;^!@FKS;h4 zA&R(3Gy21V3W*nQ9}O4~=GL;qI&p}P;%G77N@oKQvwnr@4B{S+bQqqfh!`u0sI||s zU=j@N5TBplVpT@yFA&D@huafgkSn57V>A3c_v^r@o^bo|mLviwv^IWebKlHuA*R9W zB1IZ}S89#q@h58NViuW{BiBDEtp9`0gr5Aa>sWkg=O&dYm9X!vGHUBI`M#>Q%UU zk%C8dh(np?8ob2x21Xfm#BFWz8Xrm&-t?yxukyxIuEk#w=cX?`eeG;PmYF}SLudm% zd&_S=J(?lp)LP%AeE)52DtY2>OjpS?g2H80Xn(qC2K)(DO+}($?7Z*Z=Ux^Y>ba(H zJ$}v=4Tg%PYjq67sgT5bYNcFe_~2s>;Cvc_u22MOC>eKpp=|0SrkBtCp4zIvC!wQ7 zs^!BU;#(4*MoD5^zUE^^EOr?CnDKc{W6fX@*A;E$iaF?zvTdMO?A~D&*>9X~p6DetOn^|S|Em$P~2x%5uA+4ifixZcn z0GeS`22Y?~xamRe>b#$#aY|l%{kaGj(ByGmBM0*8-*Nmdqn+^W6|i3TlF&^5@Z2)g zp(-gZPWB$f>dR`CGo$*}Rwn>RY%a6<-g?PPY-OB{gTswWv!FcOVdj*dpP%Brdq+uq zmhyczgHGwj?Fm8~WCECeKGl+{U7B=xy91P64`&f@xHPb|bKE8D6n0}FfIff)u(W* zVzN%~!#q|zPB#=Q+l1_?g5P`w`1-B&{KA6xLjCWop@&yzORT$YT zsRve#qc``4d!DmxX?5LJ>VZpuC%+AiyEF79B~AMa_z5hCihz>L=5sq$%d@?U-_}0l zXLoVcrMX$hvXvM|CIDJ87e8Q2@C@WLyelbr=4sikQ@OyGk;~XUSJm8O_HM|r+w@TyjAUMci~SzSFKSN zV5{IU>&X%VA0O6j486vqQ|@9r_lYJmWqU!rskf9m{Ag*hUu}9&ua<}bJ&llkEqV#FpT0&rXWa<7HAg(HDM*ve@~KJsV!lEX)i$oOAzZ448}4kY|| zUkQGDC&~(W<=-(~OSOu@OhAidDIJkQ&m;QzO!q=_-FNQXi2@QH`DQ1DYl=P#T&>L< zytOM0v*RB#1(PW!1Q$Tv1VxZqyR>pv2M6v?q&w-~Rp4av=gO;2YKB?TM-fPG4oSU!DrG z>UVK58Z22K+yvhZ#~%p!@f?VM+R(d*>FeObuv*%g9u{ZEzh<{H(?}LzwQV>sBVXBX zGsp;kyV)?pNuaj2)_S}kMYFK7vvgzWB)_rpfs=nk2CF+~e3S{;Sa0Hdm{pwwDvQXa zkU7K2<=PwJ!0KmZ)Gg%9w?*S%DvS&E53t&WTQU9K`Vq518IvAs3?uXoZdi)|lt?%0 z&AH_>JgY4ieAQKy)W_G8_2d|c&`Uk|3U@*!gN4Tp+G`hSMHid%R%Z(NDAzV9tIOZN zvifcxijZgsF4QyAaNb-~?qCz=HfL0#V^rs_K|n*X>DKvk!XxZ#?Ch>Uq0)ACcJKWCE8V$EP6`e>T6&z~ zV_EJoNxkKBQOo}ON1^GguTd6*CB%TzwU2k~H^8v9JNVeS^F$DXdEPlDXYw}}m+^sO zhclS+BP#R7Ka2zz?#L@M%;+$l4aw7=>C6h{O`0RQ&)LiQ7jOF{$v|y_(>>Yl{3ace zJ{Ngx=Y~#Rd!`p^KM~(j;@>hy8*m~3ZXoqN|0>60-j}C2(GA`K zd?Wov==0Xw81lE*Rbs1C$G}dxz#kBY{coZzp2ulr5g18ZoTA8L^9{^YdeuxTX-)Og zrAsYfXTdCfhTiSxGky7YX1C8EPKAP$j^`QPoyU?Gti0j*rJ$m{iUem)Whum>^JOV8 z=14e!PyC+NUTF=$*IrKb0H+I{2h0lURlhmOt9MJorr!#)&>&cKP_Xat$B!RXnin?< z?uH8SfUkv%$(U=E!|WfAzb)jQIX5}D^016dXC`JPrZ3~aoT#g9tNp_oFHy5 zpFzFL-@9qA?5?y>kcbFW*mka3oPR z)hk9Ws4p|XA*Ry zV_V@rX~gH^?F_D65?@9x$Zb2lwJoK;^(3a(J-v9I+4~(od%e3ulh_q|E90=od=O4^ z44{Z6K^1g^e_U6e&ZhPsHI!$+>*;;|d}Tym6B$On-4_880AFZwnl7?iNJ!fbQDvLm z54DcKNwo??-93t$^vp5s$xe2J&b7g(pZJZ+aJzVHX8OWP%-6_7_{@#5(RPHqst1a~ zBcujzq$nJ{Y{4Ah_+)b{LiPgI(-zsO<-oc3;AuKSmG|KL1z-X^&&QkdrZVcHl9Hw} zxwqfC>vyp&y`m;>K%QPahv2 
zc@bfN^x(_>7BJ;_d-3)e;1&E`+X;ExY)ol3hx2mvWPFko;h#XE9JYsoob`z3W`oX* z#KB-;8}kFFcrSu7Kx~Bb^_Asm2Acsp!-;zEw=L)b^W_15)4_v6Yco1Jx^j8vd0$&_ z?uXC#;E$JX6NGC?c|Y$I@^#m37@b#k z{*=C2QOexLv$Z`2;RZBuj{%r*6;5T?0_4*zU0-$woW)Cc#$|oEm)lEpxyyF8m~{TO zkI-aNrcOtanxeAuJ1hMkt4D?dumO|xT17dgP8^%~KN$(2(zV6Mdceo&&R&p`lHze# zo;oRD`T!(%Flhv+ZPcEiB0?T|!Snd@zeWI0JNUQJ$9_0xJF{D!Za$yFFu=W?*+u36 z$Et4raN8H3Ud80D!9{=DM^dsJ{+k=*u`8gjm~n|<{o}nPoaXC4`w`uxwdM6WL+HxQ zblsx#eLrv66+N&ky+_~`cuc#}gvLTGbpWomWIg$sH<6GS1^%u8Uu8F@v3RGeD)j(Pwup`hU{GPSZ67A$Zv-=)(@Z4@+%b9RWtV zb2na<7tWT1nA!c1B(m`~*tdK`^P+T}8yuT=W<~L#(aT#g$qMn7;6EP!1>Rq?(AtwV z!jU&6e>9kljg4HLuXOj?rO~DcX4r);-qr!Kwt&ASoIP5{Z$*& zQ=Q?h+}mZ9IJl8v+y$S`8Yr>@VMp3=BL8mK^$LQwI1})5v#aKPLIS2;bMa$8>;c;7 z_O);X$E=rca+tJ#IBU+JPS?PIU8Iiz`9&Ekq7tRpyPh?^tkQ}1ldZ|YfQS46XB5Q_ zq?Ag7AL0UYXiShJhCRu=uw8J!c6NE7*v`5F+!5yszgDx$kqcs-Rv_}Y0<_L|>;3V& z&{9jKSueq3JL7NVW;2W z;CsEel|)$m?BjuI)c?oUcYrl{u3@M3sHe5)X;mm7P!&W$pfU>xp{0t5h{_TW&?0*& zdxz>#Ta{rDkQo#K*|KE@tFo6&nL!|I3=mdW`JXrWBA);MzpK}|+C%bvU*7SI`@Wy& zy;&dT{LUgf>x|{j>G#qB-Y%b^_=GH$6QMAgJpX_Izr%y+`>9y;QyIOc&TF^Uuck-_HIoPXK5Us}}*Q8;G~@)n%U&9bj?YU(VP3FOO#Mza70_0?Pz|8T|Hrw8C1PyIgKLin^nH*bcGf3u3$`fbgl z!-?MX(T+~JU7Nphq7@Qlkzo2Ga|5BHhA9R&n~mw4bc5w#K#p z0r||Cb8$*?o!RC>g!N$|mNP)1??AxXj80iH#0Dt-`eo>^!>og6U?=i9K&r;N+P8;D zUfz5(cd>UgXsGkVsVWWlzx9rRBj;;P(@+0V#_Ajj84n#oc{e!L94Ns&Dj^YcL^~P{ zR-*sf()&b>kfTRu4P~)g&dv>7$vG%lVt-z&zPbKgHjoBQrE+YHXE8jb_bci7j~_UB z(pBq>-b;chI?=o_br@vJN@5UYq2`Z5ZtSi=0G~T$`2(fg)uXJ`$5jk(t2VY$b*(=a z36#~!zmSz*(73|vQW#2AFi6g=h%>2{0|*~DcNV%e`GtNeP-*ZZ)Vv1vdN0 zyr%oKzooMvM>X{ep(xt78E<*2AJ7bbMCat(@8J|qndyrSGxG{n$qHX5G4Y!EcmKV3 z>V2wq8_RXHpk~Z^w^QPM#K5KrYtH{|y}4-7j6YeL2v|PrYWX>%{|dMDIkDhI%l)bA zY1sn%%*O}VJeMp|T6J8l2|Y+s`q78Eu{_+UKzaBVlb_pts#8MAWBSMMzlTupFNm8J zAfkXbQ}JDT2nMp_j5Buo1dr4g=Z4B`^M=b?fn;g_)AWyp!E)?(p(V)`4{^AT-^?Qo z!M`C=I0aS_DoL}TlMp@TJ)}zQQTpiy?@(Fk%7hMqh2ro#mz}7X56{+fwdXvgkU=#! 
z3$dnL39noAwB3tWdfJt;>7oaJesR{@hzDnL2Dvaef+&RT0&dodL_#3E?|bx!S0h9% z$9uI(u~ggR;DYsjN~-l!Zo``PeQ@ zc1+1M3%RxYdUGNvvZ^YGe;5*+%s)TlL5{O^Yf~W41Nz|q#t2(bW9s$<$PBX~1^)wP zTiLJiy@n8+Zxn*x^m6auKvZ^*TD2`v#`D1#m`Uc;%D}~Z&tCQ`g|hAe2RY3w$br8- zbO=snGV5^1#wpO)%J-Dz!%_=_#~IzkI*r|ve1~_sH?jgmE->{FbV9BQpbkWr@Niun z6*#2`@}I2Wv)CQfBC>fxV>NKev*mhH`zk z__g;>_1QpdC6kp{sY|0Q5Q&3zrGh=@f`EZzAMBlIULZb@9Qx;8os%aZ1b($LEX4c~EtUon9uL0d@S#77`aRI8 zTyG%+?toKyG||`mo1z}z;RLG+?!zxIQD~68q`h$ATkX4;NUPF zRmCYW&YSJ{6kjYHu)JQH>d@S&J!F`;-{q`LrK}e@-Mn;hjJ577Q^Z;M@?#Q7=zA1s7CKWlAFb0@x_miC+r2qN&`-oH zz*AdHIlubT+(x$GlZz-dUU5KH?k7!RTpjk=z*SlkFYVo99*UNwZ-Fge(ZiF!d`qXZ z&_I@!D1q;adxySqI46c&G?^QcyGa!?*R%wyjeJ**?}I8<92_XXvU0E(*jM15%*$lC zrua(-Ecb_}uJed41`YTYu;HocUr4>Xu+QNxXcszW|CN`ruX-ncVbXkC$hhE}RT_&k z{a?9I$H*kU{D-^&lbE{Qlp+i@M?THN?AOliIKGno;s$STZQu1QKVSCHp`JoVeuy=L zg9VhW`Z%8V!+u?J;{$D0$2BcYd*p*+G!7it&Is^CUep~3Ikg>N-Fux%C7g<>5nhWO zhWMq1Uv~=Z8;kQaiw#(vfX!|9>LB*wFU8FQie$4k+r(2dY_A3MDfw)%af9?Zds&F6 zp%uX&;8kumDUQ8O&#vIpB>GKvp?`r;!2{)|omr-9-3p)g^Bv2n!nfVJR;RpAAyG}M z!fpUYS$koY!T1-%2ar*I2XJ7r4oUF)?;&&W{Z2^rB+9~lr*kxZ|NXdm#Yq%o17u>D zVO$^Ik-*5Il16wPksNgaEfo1ieUx*WQT~GuVzZeCs4}Uyy=yoSIG?l@y{R3tz zZ#@BkO-u+>39m-9)>sk-oQJ6ynk*vh;=_jy1)rR+d2-TwzB)KWwf)(t2P+#i0w}7z zuS1HXn|v-cHJWqFv7aaIyt}D$Q&X(hciHMD-FBx2hYfj&iqGv@GK}(&HG`NM(hX#X z4jr;;jyE+ez-|#}x-j&tV{RBpNgDa6)s7ZBKI_r~(|~!oaGPaHp7}0+0;h1#8ejI= zewapN_5Uz-K5l=nc$i+m#%w?iQ&57v#r2^LWx|dEr+%eFhmhqHsIY>}0ODZ?9N+09f;$Vctot3k>#JGRqPK~bX{pcfjTR)NT+y2H?10G8P#Lx{OjeG?!8 zIJw~gHl8x;&4LM^!vEd zI1IlAlFhx$;(^l$us}=SeP&?lY+lqlB!05G`cSENSKz2 zRP~<_XCdkVUU(KbIGKbHby->Ds37)#cZ$b!A9(#>VNKPgbO`ls&b{NQj#YGh25^p1 z7iU)<`orG)RIFU_F9yzHT2zW+vbA4Mmmu5dO;U$tandaSys?}tzy=quTsg3d`}kk2 zt*wBR_7*xWeCk)efB*h72>N&xAZ}(|s<4q+93Yxf7NNd*f=#^RI>2T8q#nd4cOQUA zS%hC_q+{R0k*)!)$)W@^d`m}fZlrD;mXcye*)IZ6{KvUo%pl_wurOw*YeYZIV6oEu z`E0S#1)0xA8i!MTH}7sTJbjF<^w-Q+fkE9_4Amwa=0o6_=vld%J#mmbU(1Q-vlolT6)9h)90{ zLbO0W-^tXJc?e<@L%_PMRiqsK_Oo+6`kKI8`un83w}E~k-O7G+iqu{9{4{*?f-g8n zxdn*Dt`AoJ86{_LaDOPfI@fcu{TSG5pUKI|vrZiQxn6FrGVhY*>*k{0S!z;!#qBb) zy-WWrN}q5AkQ7}|N5@ghxQ;focXUtz3Ty;(8p$R9Az|7WK7y`F{Er*04Suwu5JSuK z{PgDiB8gluZB{9oR$_k@zI(!b>7N6hjpN5{CDZ2V zdnRHeIzL@u4Zu+y)9&rK9%6T%wW#K9rSSyfXBvmuFq9;r_^Ibal#=yD{eXrherle7 zmqaS!P#*(7op2WpQw6YZ(GO?c*d z!;=$)HhR1+=gn#Bo32{=0iS;LZ?o=~6hX4%K&pm18kgvPk6sCj>fpxR z%G06cOFiNMqct9haUepJuvxz*7Je4+Tztn15zZ~Sw4zO1{wc7SU!A_A&xqXV^kmy= zrXl;CD$qN9fKB=sV7w8?7|&hYhu#defLboAxRm<*=zTr0aK1v!2Lu*=yjP?Wt|8j^ zqE0pEVv(OcP2#K_`eVHiTDCr5#q+?gT5YZ~j@>^bpyXDQ9OD!X23^J7-QBVa3&H~B zsmbqkT{_gm(sJEP-D*TMzB{8{P8o(jS5jERWMmx|4(2ThaDw95e|FK6r2LP;-Hw)W z+*!Wj-sG}ynY%hK>iGC!q-*4jEDDyekpd~A8dngw^YF7G*0DEF2je$aa;LBXQHzoF z|McvDjDsGxlw)cgbF4Bm&%K#5$dRtMzOu6!ZHUq&!r-Ql9KpWTA}`_SRQnB?FU*i) zd^wzO?H-rQ1>I&MlA>46gDwu%-qQLMe|TeKqlf}0I4ztYeDYG-L;g8Ytk>zb6Jx2u zB%&R~dl5*CC6`YLSQF9w1o-;rHLlj{TK~atp0KCla(HHcvAQI$>`yb3{*O_pKJb7P z{gvG~9OSDNBB0G|JH z$mVD%oV=3z{xrXcCIGc0APlJ=EJ9eu0kW|aSeMNCfwM`(zxRrEKI4pChHQj<(a77W zs*!M`U1{^LN#k+qMKk*BC78pSZ$OsQHcOA*-*@U!s&2|=udjIcS#h-N(G?ip#qd-K z`@UHxyDqwu7EKE%)X@xrKU{r>0(Ny2Y;_|WfUbnbmHt6S>NKEr6uIXro>Gy;*lID;!No!C`*HnBLV#*naa*)0A9Sq$%2YNq(d+5mRP zm@RqM%hr`Sq4-+==JV-wFvcp(asr8n^7G33*-xjx#yQr)zS;Pb1{QRp<*7QLi9|@H zWr1P6CN9g_N&6`Rd(G-amd|I@m1K9SM(%MsyLNY#y)5suOdM*z_WV++-*f8>0NNGj zYYx*JodaaRY?z*I#NPSpyQOJpa;2fR{@0VYd7F6taBWw;m{J1(YCT=qY zUCEbHug?y?rme+;JqO#pzS>33s@tPH{ZhCV5Tb%}%Gh{S-XCP7y3!Zf*q+5+>E*N1m4Pw(JyFVwy48$c#jyjg+`%tuX2RdEQUQgz zpbley_ToNo(B8O?KfPxKh$78<2AB)hf58%fJ;!^}>qf^EVUtb@6Bb(gP;Hzijotsa z!sbFdKciNNlv!8e;SgEBQ^7L43fpvb*xm)%RVacRA%HvM1h3f#P*OLn-PZQ@g(c0? 
zfZf%@+IEG!-0JnM?V7}y!d^)Leg5ffHhOjA8e&0GU^1IkmXA~IIqt;R=UPvn-%M9X z($Rt1NwL~@;5W0K1nQBJ6z7OwJ+6+fuCY(0w5Bm=B-saDAfxbvfV+Ey;y=E*N1o~# zVwPK!R15l!Kp3LqzoAS>wFMePkyb`03d3$QcdWzgikjxAq;WJdcCn|l(&wFL$}Lq3 zG?!WHgVmbH?dmV(&@g~mYazNNM!k%1S($pZdLCeo+@KK7RPb^$ub5~69!Z#bSn#$|8iU~Kx zei<%8nGj$FMrurbt{R*~Vrk;9ftu zTW+OuJ%!sJ#JDbie~=?+>Ic+W%hzX_EF5L|hgpR!_o-$JHXdRaRh@GAQZY(iv$0}8 zq&9m-*AY}eZ2^#F9>h`4K}j<$hP`u3+9vt?FqbKdYy-=fh078=f<031?rejGBd{DH zB!Mx#&SaG$N`No?`&w2P1+d%$Kmbhi6}xttdiJ}`^vI>ex}4>fFxLCc1iQ11=^kxA znX$f2J^jv9fAzb&ZR_l2_^}r{j8EFu-U5MhtW?0*w%0}&2!2)iRm*p?u z#lnRZXDheq8X9g$tiaYqUNKB~+m7#QYDG!0Uv1j~roPXmUZNOq1^drEO^x6DBhsko z@nIZ#_`DKh_I!%5S`Nu%hh2l^zkCHY9&w)#-$QTL-87$>$@J{=MDT&sr}qTUs;jF< z95*5gc=nw*tP$KGMPSrV7j&5wWO0DVdZ>0%v{?D?IWKBvn%}a7@5u9hd3G)~o6WjA z=(M7RyKlVVN}m_eiWu2kqv<4FszrwJ?p^XH_1*&eNQ)2^d0715XEVLcu>rr_nYCH3 z#b|tDn3fw=94$*czFTD`rxVqiX0^aV$lU~$Goy<3vpLn0j3XCh)5cH-FK+{d) zNHby6RbSO_Wfl!A*g;x}HONpji9nX;`)~+$8?IfEvST0Yt411&AEeJ`PnT#e_1Mj@ z_B@UTy?4~O{6}R-niuyKgL89*X0z8p;=$-{4u}9S^N1#hKFOZ_G9Y9Yt_4}9NgGEZ z@bWlVR@%U-SCw|WemE_c5v|COi3w*#sVhJ6fq83*@LNnbZ=^<=$1A#yj^vRx96Ibx z04~N0`~H~TawTK2H@#;3p2 zGAu0o!Om!xxH+&R^}*XNUjs=s^J+NQ8xSoS5Xz!?g|yIJ`o>_TtZdnu$vlb#WtvaQ zXy$@TYcJuL0zVt)2D2x_Sps1qji>YjHeU-hkJ@{Ru`SItnDWO7IXNdvLC#7pcLJn@ z_m>-2xlr7VuFBjz8(Y`lQY*LQMkBp&p;CHPcM(mAz4%l!N^*DM^0!0ixu(WpWix7; zR{O(v>GK(Egdtx=L5hVi%wB|N<>$hK#34>>fTeB_9>T96iT(rjwEiWIqWSVtmhS>} zeezLcdahZ&L2dJH8fmqJ9j^q2K;M&ua7S3$wl+33sVVWs;Ur)_ZNV27!gLX%Kfk!# zC^d>Pkg20*DbUklIX>u^>r-9DUQt3&ouMXfVi6?gGT=Q1s4cA#B+U`R#Sn_qQ}7$d z5Is=hK4svd&K?Hb%6M{<5ey=4+`C2zeMa%l#m4H+88$+5)#wGp(~nKRPLTKjA%+a- zT9BdJSXC{pql;f+4r)KQO5D3lA8^P-6BCKb{N9X1~9W%JUF<<Sa&-0VBN%L9jf9@1KV_{*j)NM^lD}#V)=dYqssmL(h_B_V9Lh%8Q)Oom-2aMns zqkh6^KErP;qB=cyiOZc=>G`pK1IOd6we~WDb%*q#9$I_6K6Yg=)bk#cl%Hy1wZ~E%;fcbj+4o34=D(vtoAF8YZ`F??9 zO2&JB1SLV5MD#vVBx! 
zdMv62qbN>1@h(IW3;pVm}`{mIuIJt#)`+{wOn0YH91M@;BD{q@S^!l$~hM1 z z-OT(KJNEcdAS-{ISu{|x|FaPxdk&a3Uw=z)OV4#w7>}k+ujQ~Zdt3nw-39gK0bghI zs)(8JH^qZ~)$ijj^BNShLkUb?crRxPN;f?E(u;=>mQkQo)GTs)$Mc-m)gdr!2@nEW#o~D6%-XfA_H=>)*@#LN}5kyyxyj*kaCicEI4_FXM7H{|+E#tKLyglP#UzKAJ$x#*nB?lSy^oJZpD0R`@cgoCf z?%gzyA&T?{nYO;ZSS0QS$L8MzT6lO|3b0tfVw$)>1Ni3vEm?M98#f`y#_5>Lkh_J9 zch#-5_1PJpv6nMGkt`>76r~SAnYndh#q0C+u2Y#sR*iC9=U^cLgBiOp)g`5^t&QFq z!c5qHaUe0f4y2P*DJQ#gY}oFJ58D~)SU}b zcX>$NC;OhV%|th>&9g6|Py$Kd!RGl(yHB=5AacNamj@?!Z3KGJr!q^pq1PAaGgtWs}6Gbi8 zK|2!(hEfqCIb?4s75=UOMnsqb2ZpY|@GGubkFw41@>1F)^KhXO?dIo~W-H;*Efoec23$)U_P-^1QW1AxE?fY;cQE1~R)CJT-vKbBm7F}_D_6@z$#5ufA zkn@ZV9p;L|=6XYZ+Vn7>FVou!&n8_u#NFrF1DA7EI6cjn76v+Y_{N)JxepKdyc+ zvZ=kcDRI(Jhk%M6nDZq88>^PU4lwErD~Q5TP;MqMiEWdXyC?DP=g${^%t_05nnlzK z{xQ_u(41}P0VzV@OTfw-p;1_#+O&akz&oG}su`HR$B1!;sK-OR2z#vriKy}$n!y!5 zZle-TS#{(*T10mM2Yopng2j7NCY03+%QK)WftCA@v=jDaHIt6kOCwWV_z0eXSi^AU zSm2_|$Zr0}sQegr*-7!MZ?0~C59DGl;-%2_lK5&oer&!vCE36dT+{ymD}mH6DnRoO z61?Vv))zvW^)104Hc)TJv_3V(Of~h3Q*QFV-<#6d*p*Gl5KGTa9~nBlnqu7*%#OAq zzBuKkWd{poGFK6E<(9~-BNr@lcV8R+#QvbczqZoHmqdHpV=i@E!_qf<%F@;Y1Exmn zw<%x_`57z&_Ah}o*_T8JQ1!$!PU{y^mE!1$%;^|dFf7RP>Zvb^7{zmZN#L@Xvkv-& zKGhz@f1J@Zu+5NSFRVjb)vSTx?avj)sL#zqv@e#0b?6gG0t5K+68GcT&@Tx0^0zNP z*-~~&3v%I$QF%9P^*4jc3u}U7vW&22gNwjk5+@$@?5NAC_Aq^#-!*;Se=RF4(X;0T zJLkCnMWajuX>}UmV3D?0CL0D}4pzXL#zsf)i_!U^nAvTsNNWN_pheBEl}_KgCenOtRSFz*WmH=j+L{qbNYB*_+*zRp&`@$&7u~ zdjy*YgD>XcR`KLhOh}5BU2XK+z42*v2KJOt z@w*OTO#(gaeC~1-%-HAOJk(*d3-Nrd2uR=&T%B`hwJ5-T^B3~f%U>P8(i&@#%^NpC z=QgB@&J=1cP3L*Ar={?V{xA`)V9u3`PR7sa4y4t2_WL#ZIW9M{agY2jI-tIcEgk(B z>C!fSOuu`Fy<3A0KtLs&?4|IFu8k2EheaD}jL5#Z(CpwcNnDkzU6Rq7#dcgKU-ZbR zPwk=@que7bNdgkMj|YXCt?b>|30zC(7xV05^S*WTfN$e?nW87~!RwOz*0jtnw#{$h zmSZImCG&OSVZxAe`6*9z=Z*Dp(l*E5EgHQIX!OS_^6a#&#Q^Pt5vV@(GwcLGI3XG{ zmo0@TMRV;>3`@@)=vNvAVmOII3@2o!iI;Mg3vM>}y;Nhva=@CGe&N~D)$asaA9pCr zv#BDRdT`O$bye|yY4F{jVkEjI*hyAG=2riQKx*ax65r?fEy*F-CM<*TMKk8hlCc|W zqSC8LQTpAL+9hM3Q_)KEt(Bry>-3|dV%N!ap#Jlw#2DQlH`*@H4y(dDsTG$ga~2d% z#K(b?+5F#JcA)3hB&caZqWsw6UQpA}C=iByw^x&!OHA3z=(UWF5D6tSke>pzj<1J>F=Nts)?`j#Y>0b^GHv5|yVPml&MnMY2BLJDjL!F~_n6|v8zrTO? zer)(I-$j(CIGHcT&`RP(UC8>#s^aw+GqxpISwA_5()?K^Z>3vWG*! 
zQXYv4@Mr@RJcbFMy0~8N3xet=jQ1+}r35Th_Da$zE21sY;XsX-yRNj*YMlRNSG^!ZNg@|P`O2)o31$h8Q zB&DVvQ}J0iQ{>d&3YEDiTZLN18aPCXKAqQWHB$2kafS$1L`b+1f+JQZQpO-vJO@Pq zT5CNKNyl`PpBrYIX%eCACdIlrUPJ-&*q00Ay%sYMxOJ-=y?Rca$;hdY>=L5?y+G?P zcx#2jwRo@r?SBae8qB_v3OMoRE|->8y*cHiAaIEwFMND^x6)BiQwC6)jmo@kBGUZ| zyf&|@pBJh)N5ZfgP=ZB+R7Ppr&fN}~QrMUTT3#J3d#7miVLWZ_BGHeDU;R-C{Ym`0 zF@jrQY`+;8=P2OL4jY>LyUPRWd@fq{ z7-|_HUVvpZSB)b8=7p$Z|Ks#^6XWQs~=#X6oy2|6SFra+gB8svx#M#-!I^UG6z9C z?NnfG2x)=O_|wcKKkRF&C6o|?oI1_-DKsNX^_S$A{V9mc9b2w7 z)B>jpT{!~zl%N0*)MoR4(tII_%7bN9_#4{u=!ys*jW}e!v1sA#UA3OJ{?gHx`WGkJ z8%#vCOi%Ngl*A!4wm|vhvVXI_p=mfo-yol2Xo<^X)_X8<^?%rQYk?Ng@U!fo8z+PW z^+76-R4`Vm@$qOUnwu#=;rju<+1Cunh(S>|`W&{RoE-)%SMGGL{%@B{ZTa5x+)wER z%_zdHcv``7X1MyhD5u!~9ga4vYZ=JzQL3y6Rf#fiPft@%SQFNenKnTUE~LSQJ$C89 zjvW@^sNx4H-b+D>iqxu``bO%eB9XiKLJLaW*j!0vE=%(u8s~QDbnt;Fz#Wtz$A@Fm zCZN=G6W;!aCeRm;!_F%m^QA%cB&>1a+OCSFf`@!^Vvs($M|EHedqs4>_@!|5cl>dT z2QlgMv`@=94n^0B*dfDl6Idy7onR>*9#E8qEZ2oN0&;{#xL-H|?Y9y^$im9|fc#_3 zg%Dz^s;Y=4*oe=D7#rV@_1m!Uo@?~hXS~){kr~-L#WucIyJ6yv{t9rKWZCRLeUObB zIhQZO5+IV8>tiVT3=@W)12?Kdgh)_<*(Sn~9S)YSFX%K=<5N-$Ed6~wYWHkfq(gp- zy);t5iA+M}vpf*hMHH_rz_NkcsaWmEP6k1w2q$eCEbrTRJ5@Zc1Nn)4M$Z#qaJRC# z{N1o5iJ;LUYr-a@S8m$_^oql)k9R@QlxN-Cga8l(JA=~4vI~n52ZbP|b3z1mT1G_{ z@j=KJDIG2o$!n`d8)M+GI7Yo96qrc^WZDi)?N0p4ukrp%-H5!I06!8P6N6xi>rgIH z1JK?wO$eo8aH(nkzbw1UFvhaCl&}yfD9h0#2W+^&5ps}1JCsZ$^Pc$Q_oiQlAA*L@ z5-^Tt&_L$`_5=t$S`5g`fUcxI3xmm8Mv;bsdiSRHHIzD4Zcb4jka`kgFQFHZgAOPc z)WI+|Kow*tWF;U`KuAtE;2`N)6Hv?rwQ_}5U;`RihU-fcHrG*`h;Cq1v^X`%YQ!rp zGXb*5;!YU*ruH^t5!F7TrzoDbZ<^?&~RjpAV7 zwt=oi0CK%ie*!;x3}vA!7yUH2&Vp=&KHp)Ek9g)Knw~DqBXr`Y@{;mpp?EaEmyK9> zG>cxt&#FLr1W1x=F0Ep1%Svq``NQaVx`t)bKS}xwh2?=mq@L%#Y@29T|IHbk_+L8y zle(X+qKz(OJzzrXU3JF{sHBRj=lyzQG?nbPm(w)d>_-np#J z!8O!oAcV=k1%qESN#_9;3<||d^qE4~R~@@aq^M{*%_muGD)($D3PXc~xY+jmD5@0@ z{{WMg%iIKA`-_Y93&j9rHKRi}p-sWh3Vk=;Nc(d<3SqHz!CP^nB8;HF4%9zDnKOhz-}?e}byq4puwLIgjbxvMMZsvs!4X-u*bkFwo#|1?|VlP;^wN% zU6`+z8bhZkLVb7ys1!vsL0!@z2dniQQYe6oydhdnTtY%ZKbtwzyC8~Z)cVcXo7W$_ zci>XYyqvUU-D4;|6W5dm0B&KrJ8$GPdl1x2+}~~Nbp_Pj&@v{o!1%5ZGc}`UfWp|0 z3)~;oEdI5hEMn9zM=gRL87jxIP{{Id?H-Hy5wcb$*bT4>GN3O>Z*9zRUZQ z>}mC*LHC`&pZ*hci$B*!_VmpnImr|9om|l$vhC7rjbr@nRl|}`bR0R6_2jwyt5X)w zs-Aat{`=Fegjdc#)&2Y5HDWdLu4Mdt_`<%MmZJ^xYbty8L@vkrj&3}#j#nOw+lFiT z$Z-2t!ge$oNT-QxRfaiPbq{|128dWHWC7it8+vC{;%1A~Z$MeeLurd6>H>wD5_B%I z#|VejIjpAV?BV3h8~z$FmN|5 zXfa69qVC~w^Sq~_B-iotie4j?Y-f@$37fES5aiV7h^S;AO!l#2F6@cU$;O$Z;$)xW zIue(SYW7epuQn;!RAXO6O<@}cotaC{pha;3eG&JLGsv02DnY@yCA26J;q-ADv|es% z)DCHqx~}ZdtHKlevyz+~=FRfaeI#t4X9)_H;fT;~eeCH<`D<2EwVs4!w}VsfizUNE zgPP?{mDr#_f_vIA5NZ0h!*&#|h2m)~r?1&xt7Jm+P6d}Gxsc~Msm3*g*)do(bL+cuX?*qEB=FI_Gzu>y?%a`~A(bdbQ+Aj;UN0>ZhI%yqgqXkJ~Z;f7WqLP4jP zUNSc)>yuIH3YVHznn5~aGFzy5I6LF$$i0ULLYoEVpJI~|5N=ms?7!BA^HP8oW6|Jr zMNu+G#6P@8#QQzjz%w__>!)`>ECE5gQ%5|CC!zKx?)B}RV_~HA>q`^@6G>)49N4e{ zhr=Px48p{-#F6J==j-IUuH@;H1BhtFLN%x+5zK-mluemz*t1^_H2>s&1qJ4f`mj3= zzcW7-Gp09M^R!YV%0LBVv!@SO&rL|GWy8Y74M#!SRtLh1M}V!1Om$|~Ye|(q21Wp4 z{r&NOEu1e05u8cE@I?)Qa3Er+E5ji=j*c@{)sc``q0Sl*I9wrK2hjoYd4+a6DJ@t; zCh_sLVm9?L>Ex3jZ9=IhT3#NfLfFCYgP7%>uX zDsVX^8APr5PCEtMT9UgUkdn;7()r5LA`f*7Ro9M(ja3@fLivd-xlyIE9$GZPBXGL% zIU<;@XCp^weLzynmuJD@!dqX~%`~6~#l+WS+}(d?Jt3~Bks?s2%XYjyg6Ox%g6Yc@ zFS7^DUGIVqxH=JeEClxQ%<>sb>iaqIX($)1O9FrL<~cii3^%nw-B>?7OHNMqqxn+6 zvFttn%STi5#HUX`N`y#VemQ7rj+G1y&YV2yX}d{tju*S$KWXpEcsn=IQNVcxV2yBv z<0#S-gltQ=R#bKF?}u(7P%7N(!@S)F=7003 zJl{J?+e0a4;|-3e>+W``ij_ZheuZtIA>+VBt zx{%+oE>*pCk-vMFXCij#^vM)_Hgo>0-?sP!l!~ZS2-U-a&JDlT-RuJ0nIbsS^g+i) zFI!vmK6f(gQ+m$D%Sx7)cCgcl1b;yGf6~moUQ0XxRTWXtevUX&jo-Y9)L1SUeGyo+ 
zJXu$uNKcFEZn-Qm!d$Oak)Gd%(c&!uu=_%x{Bq>4!@)J6l?@kWpwqh{wM-x3;7QKT z8&%1EqiMWGWp<_q@fQE=bZr;B&5kx(P%VYq4QR+xTdlg3gIXDC&KG3+OQ4=mXPvgQ z&A@Dv-;{!m)o6YFuI^2tNs^S{iCp%8&L9A1XCWB617I18AxsYwCav|2&BxLFQa--{ zt)XB~d91ylUvVhcM4KE>{4Yng1`>~=gYmym%7!w&#MHv(n^fDHIF6HOURYI_A8iPk~yp?Yy#M2ZuVRy=#b!qzX9&zJ^dcU zOHg^>w1|;*I9k;+eS3OTy47`m3*Ccw+KNhU{a`#h>a#(|t{S)8-lO^i6*WV`nQD-Z zM7h9she0zi*qF)FKA3gQ99Rv~W=y49jdp}jeakSP!5^yrREz;vJEhwV4|m)TrjFll>-iPP2Iy6zUA;GES+`AUf9{NrXX7c+5N*@U-mB=I(GPUY;1_@o4inZmaP#-t z&zx|C#+XENm$T2aQ1`6#TI$~FBxlZ`x_!s;`Yj2+Y7!wk;*=!?7Fx`uxhuNXJDA%VB5gyV>`8d1&rLSRBS}q}x zZHpkJ2)54Nevzggwb1`F7KwaNH827-pvb?R>Wjjq)i#n-Bd35ivvZhB9A#-{HIq|z zF38MwCAwn2Ci!fEab^T)lFCz4Q#F$=dH2#`EtV=}=j7b9Z@1Uy;{&>DyhXSc3LO#d zN#w_TfThX&YRLimu&*@XUC(a;uSGS+R&E9)1I|j0 zs#ci5u_mT$GGQ|YVT}o=9&l5MT9VLu;|dXsqJzi!9Uik9s7uZT>p(p$#5e;&hf4s|n^ClZJ|*KB6N^9||eN z4nS8d)6E4PrY!0Wnbu(k|3pVeyMSy3)fP#lA}KLkYIG%9p|uNTy6f9+vo~ua{2DWt zSi}HslfAz|zz%fa2nelur#~RX6>OcTW-#c`*T*&Rb*PrZt{@sfF$I)C8kb07_suI| zyf<`t*DuAL`=^TX*pO~jaz&G)1o~^{9UVK^!FkQ9F~UeqA+PKK!U18;-Kr~tsO=zQ z+8Q5&v|&ii1@D~&nHcvBhS{{l#>bKmnU~Dvz#N%`!NK>DU-i1N*(sbeqWQN=n3esa ziB{upn*idn#1jzxf-yDw1%<{Unjq-umKNcRZg+0Slxr1)*riWXLrK2VS-hqGB=+dO z6k(YS#ynM%7#%%CAfxuNQ0f{AHXrKy@^3gmwya5nc`Z1n$^PMiR{1$O$4KQ%k7SSg zCWeycKhC%g4fL`-YG4r%axJ%@r`RK5RFD8|W!&wE*I~D6l2qSOMmwNRT=jZJtbK70_TkIukAsABWk)py1joMb%g;JzxT+w57R14 z6$3s{U~b*#bdjn&K2j$}w`PY!n_^_VwEUvGlLZbb04>uSG zA%xAZsH=e_NL&zif&TsD2cTwRfIG0sTV_FmaDQ;H9hc-d_e`VOcaA+gCNp1kx9Yde zObd_3SJ(X6F+rs=wYvi+@d5}9MmBRb`=Nlce21NT8wnB!JQ&*ztFdN43Q)RB903is z{X_|SYH&0Y3tc$^aE;#H6Vbl~v<|4d*4?|u&hP>~-|Pj?hbf}roO}1`ag(2HhJsn+scbP36$IQOMcZd(B9q- zRd2$YfQwLM2~wM51w9Y$-o5*sfRe1HwDUk2w61v037#w0ZoZyr9<3%5o`wbqwu+N~ zv7oq3=&|_KghW^Jp~|&k2%sCUYptCx)nUI#LV3x|-`it?LRObwG+XYi=4HVu`PX0@ zHYTK|PMC6Z5RU!y?5sM9IX*qKqeIVSuGT{4g<%)AYvtO3q{gnmLPu&3m!dzocr(i7 z8uHC{NyhBXHEhx|p+_|5gUcDvRf1gl>3!L&>+T2f0n?%9dOTVBSva03niV!d+m5;{ z?DH@6H)ey!T&raseB1r0cd-sHSY*Pfhl0qQeiDT~n9X19JP?X_N1=^zjW*!R zayivUWuis+;-7g2=KKy$b(O0|Y9EM{jI+ZQI!k_e3)!2Uf?hu%+fxA&&XmGJd8B9q z+*zKBSO{e}x%MaofU7K%hr2x9HH)NQ0~uHG_()E7mM1nUEeFu5YI%gZnl^r~ao`b1 z=bsyAT0w(CZ$MK};p^Co#Ne|4b|W}@Y?t}UZvkt)6f0_7xTPUhV)UV*u7KZV_3?za ze?Kg;g2Nh?6oFN2Fh?&zEZLE(g>#dI$T&*T)e6-rplViV26F6S5R4A-d2;d*Xi+w> z@ZaZ9A6}AC*O>hD=|QM;f*1?fU-i4+9pjY#%^E~Ur@_Q6tar6N_?B_gqOhJE`?F75 zGh)+?_b!WnU51Bsk*TJut1AdfymP=LLumx-O!#}2iHW_}OvwM_$tpbb3 zhDfP!y`G>Nxr^pi5jvyv$~wS%RHL<^qL770IyY|I$I*1Op(cFgO;|9J-~n|YTGZ#F zg(~+DpLvEQzwWS;TYAPGbhYly;o3WB&3a@vL)3|_Sqqdm{A2{Jl~Idl)DsT99I!lv z-QzZ2f>lZ-W9Fvet^sp42h%fCN1laMUq3@~Vu$MqA`tyNzN&RQQ8;E(5LneC=bo?@ z5-f-Wj@aiy^nPcw^R2EgqMZ+uLOra!fNfq0)CGNe9Q+Cszkl9?hV|RqiBMox@iv+L z_BcYz2ffCzEMH+MNh8x5{0WazKOgc(Dmr%-mvD*|T?lQ?-UGA@WeU(-J2~e)=n+zxG=kD1j((JjETRC zpa{^QLcIqzBRiM=$7aS#%is0|zsUk9Enn!Rci44{;FO}FZKNV98HBC_GSF5Q78S+&qn>#3^H9>y z0cV<&146X%)er>OXt7W%WYGd4q&di{YXyiK0e6TYSE!f^0=QcS79=Ed-XL4A%fd{W z8;-dy)a)d$%BPa=*`(s3zz(Q)r=dW0{WbW4NQgJA018EKiR26;dvx&gFO}G%m>{xl z)wzB}-qGXW6eZDB67PPc#4U|8iil~fjo6`~3F`@!Gr@wq3XAi?X)GN}xb0=PS?BjL z68u(5&sj-jcL&v$bS<`rv*BWyQ(z4!9yg$xN7<3@fPmNFpS-d*7kKsAF+kw%>)n%! 
[... GIT binary patch data (base85-encoded) omitted ...]
literal 0
HcmV?d00001

diff --git a/_site/assets/topk/eq1.png b/_site/assets/topk/eq1.png
new file mode 100644
index 0000000000000000000000000000000000000000..c2dfbbad470e2f3f4e4e5e31015f20571c283662
GIT binary patch
literal 23479
[... GIT binary patch data (base85-encoded) for _site/assets/topk/eq1.png omitted ...]
z>U=^gh;fhjaZKl_zCf`_O$IAV{AIPG{Mn*ZsqYlOYn|@Rf1>k%Zk{aM@v?QzY711M zxoY{wf^=3}q#)IPvZoO+>2eacr)qtrl9>4War@UGsN82ab(d;_e*#$pdz^#QTMdsn zK2K^t5On1A;pV}Rl8l&oKC+`{EFsv$n%SaBz1h+{DHxmQLOiO^t=m8`kQyWUa2NLZ zqh+XSyp@foI)BChKyaKe2A6@jBv-gN2_;aWMcbo2%casDa4Y*quJW|xZb+R4 zpjT@9@jx^%cbM+vd%vQwv+L6>$C^aW(h6mK`-a;^ZK^e_Xe#^B$UzL`heA4rMN0Z+ znol%{r7Z$9tz$r#t*+@L-V}#GL+u)QE?er(_NLJww(B@Fc3DpF?%XQOpr4O(u*)umB83r{hOXGPUW=P;4|KlEuhVwh z!_L4qUd{;*Dm=!j_64BBjiK6aTJEdWaiO~s2b`mY9klRJ-5(-#dhh26BM@kMDu{dI zD#U)p^jc1w)S9YgSOU|m#pj?I)Oe6Rx^ZfQe9XA* zyMaSwnsiM6ItRVA`Kt+5QVyAb^Tv17?mhHe+8|rw^UB}YA|iW+1b_}sidRykQ|ujc zAFw&HF`(QNsA~B!?57Tv5z-ea*AUUYT8ifpeFi_rT8hw7#^Fb$Fyy}fgV;>)&#P;d zVD`7EulE5xD^n$s$>3UJG5}`65O+85w``uL zyRYV$fc`DlD40;wFUWThRIfvg$n@*5;A=Awz)3ehZybd^mB;j$+dP&4)oMUdbE~cS z1_ujZS{nNp(?3NQjVx8+nC(7deU1ev05rhjyW(vRqZcx8XLDuX<%kAp%ybHVU>qdj zZ)a9-=z;{Lm$!5y5p2{8IF4~Q^+h9Lrt9QQ5hQU9hxF*AZqQA{oX2^f^jua3!iIhB zVKxE5FTwvcpmie?Oc-ze2rIC8=N!0BY#;D(tQf}@N6Xj*?BA}!r&>c4a2|VS@zame zaCf-f{;=WK1X}&LAWBnBIPwer@sD5&z_L&@Blo@F(^z|qNQUh;wo5a;6$z*Ciy|9s zrA4?S69g+0dv(cTEb`gObF{2hQ=MuA<$wJCWt5lb2{M;57@dorRqNA<;AdMsK0Hkn z9H@?>j_2uGXxW&GtcZqQOr5E(0K_lqNqfv>e!lSPLX*?cYxG^HD$YmQ;p$~|NE^qY zJ9aAkg6{m8A`g;@ zb_JY}vw893Pw-qX4x==G+I?_2uQLK`C<{Gjv$T?oOigR%MbKB~6$8H+^p#wD@@6}2 z(^CU6ZGgq0zdz`CQFMqCP%+^2YkTDgh7FJjBe7FMM{ci;ki0ancrz3ldLnVC4&rRF z(jhFm=Aq`DyY514FuDIzMep|JFCD#krE$pUxKL{DgBxmP^pq1Cx>hgQqtfJjLrfAr ztrBb;-)h`@l)Jx=u%-FsgT`uFBqp3?W}SUN`ZwUKoH1eTh9l$Scijcp5X+?+gumkp z-i*l@jy|@bLQ_ZtN@AJuJ7UW}Q)kTZ|$X^Z%R+6Y^M>+@)($#6eS)-A9KdXfh zp&SFl!kN!nSXon7G#7a4!TM|GZuPPp9GlXEwQludJr&>8`P^K4u`8eCaTS7k%@3Jc zAT5!hjUFVpPk&-X+L!7dE#Cj2EE)sF@D)`0W61hcH!kA+U8ynZ&Vh;h;0Aapw&9t} zCU3`LfAlLRiw6BDZk7I7`Pr6f9L5A%{3>?rZf=(FS+dLj&gKEe&9f!r0Y4m3KMcAj zwN>oTocXeh)C+>lRn=Rh`1k2FHaCU<8}%0+VU`<+F)q7WkjL9_gh*QMRx#5O^yPnd z$Dx7^50LLiB{=RZooIZlVVN5Ky~M)JB>kGw4__I?KY33y3uHOw&O&C<*00vL6GsX^ zzQg>D)-uk`PjQ!Xon!#fBfkQ?+0(?c7?2*gm7!DL?o0{|y+&4KeQYzDzYqZpUtgL5 z20V5fsUwcGm-N{p?m63aiGGmIqK%>zB~avZwEbvcqAAQ7g3}UKmKbv zjNLpVyJ>dwRj+~hNI0^rGZUt&8qqpZnh#eJkKiw-CvAvc^|7|V_?`j1w9n=>)d^ADQB0z1ot4T7u$v@Y- zH%^+FmQdTWA6B6NVIi2sz*N~tXOaM4+hh1oX$PV#obJ)#v%3%3&v!VP{JLv${L{*S z>$}7EML%3pR#~B+tC>Gv#JGc(W-lYy$yC=n_k1O*P5n!q{}XqdEO^GoAG|TYIT3e} zKhX#}J}REcDPwzKR$*rT7Hr)X6SZr!;-MFMQ`D5gv&vn5Jj*K9Ow@vX?DK(b=ZMA?6SH#PA7_&fH9IPq^3HigV{z^OO{nAGUIXuBOPPQ%DRbN23yv z7%(VjJJ&=F?rE`dgdf;FLsuyLgMz`R!}ZeS)+YT#Ih&dco0S8h5g!GP)ZST2@zz@J z)X=s;to_^17Q#5<9kK-Rh7rCMp(ySsD0PxiICp$d{ zKdS*BF#!WZV!=Y~#K)T2S^5EDw+2K|E~{5u6XKS`j;!b1r`mIGW&NJ+P()S=^qH4k z{>b#D(&zO&`RdZZM@TGueLkUhuy3`L-@ke0Zz@94S}7?{i6eMrsLUhIj@1J@)(p{S zIpM%qnrbi*XAYJt>e0PMVt$|A9fZV~15}&uz;u|BsOdSGJ!UpU12uCh6jAsHOAZ^c z_!)<(=Am+K*3{AT1cFs7L*Pc8iIO|v&r{gLD-tzz#QZnP?}I{x#eJphePI`Qx$QA0 zxb|tYiZXhgSgM~7*F!#hi4!YU;-5IFWR(~Qp`^xV@6L}_-p)KPh>q!P88-ZTMgv@_ zbSD#?r7>N6{f)^|uiap;El0PFmolPUE?2ERuh%BI{wPp?QdR%bX?7?UZHtNLJWGVF zo1f7LgaeoGy+}Q}E?j%Dci;=bB~S#_L}aOWe)X3#O;k15ZU7+ux*^Z2#{jzMazc;6 zIFNZB>-Yu2Z05g|9P;)ZMll52HRGC&hVgzLkqZ-n@ppVrMBV1ESLt#AkMf+8=zVnR zHVoK0g)=Nf)B)rdCc)ob%kf3qv3{Fp&%KP?-GC4yD;)wFiOWajWR_ZW^D$UVEJ)NN z{g#+Jui8qMAEY4@&x!f&r0H-FL_^qu9#u9H5HOJ<3zP!nr#O-$&lh~gdOSe-Cn6mG zL9QlfWM-+?WLfVHY~yW&q5oGq)H@U?B6XqNejR2WU8Q5Z8oIM{zKkwf#%F2zHIR=Q zKuXDS{*uy8OsBg-i?r{{O)bVRKrK$OrYiOhp#gy`J*BfEwE5ebgv--l^l0bWQCwk= zOTNi4g;-NO;xgd9)G&Xz%Mh4dyU+7|1&XFsrNHFHYEAj-MKotZW2^5)vW=b}8oj4w z_H@tPv!qJlzvGofaMe%@`~wIX=#e^53xeG88kKD>6scFT8Pz)^%J^BU4?LOc^YPnF zOnmen88};|GL}W}l<55+N5P9~c_j1*F1`tGg%%{q3K;sck+TN zE{S4QLoS7{B)4CoJAjBJXBmG8$s4mWSwz_BE6^MRJGITM2;UjF_HU5cUC6UuK10>F z4eh!a;>+GtDQ@tO>2Xj@Bjpy4=h1BKMEM7=7&Z4|Jo0WBp^zM0NHhZOy3<|tpI)s3 
zSs{*dqU0L{Yf`RL(+OY*TAH6K6SGQH!D`}|?9M5`WLmGAPC0aPm=5g%-)6cqhi4F0 z-?{gel-@HjkK1>0EY|d0SR8OtIvRBL>ZhaF5`y=$E?@%%1^HX91@EoZ3wk7Bu7c%6 zxk<7&%#zz`KK814ON;~j@Tpn-VwB>d|6m=PSmJ*mbfhgR12h~mGYu> zCa-31>N_`z;D<(?7@&!#J#y8>y;cpB6HDMIiwCN(VOn|SmCA6Xa;Vx?NH?UG)%fvY z%%{%B-V%8S-itM1Q%-h1AvoG3#gY$DZ7rm-=;~gx(Efd21zZh}Fh? zd8}rcR??;(gQz~tQi_jgDnvRy^w}>Wg-WhTvBE`3jXf9F?#Vk#nlQ$0m#v$AH^;Ji zD4%LLk}1AYm`h3bU~y-;hf{SkMtl);n&NJ<>m&unO#F|mSC9YYB6uF<9#sGU9b2RO z#1P$U9Z>h#!%p`>C}KBNgMOJfW+#b#!jN@wSEmqUxdlA_IEWRGv{TNgL5Yx4$mZrR zKG~jPTGEh19xqq$W?Z8b<^``v62_keN-^~+-E}|8wiSAeBrMEEI_ciD7DMnhsZoE^ z>m7~GP^s~|B#*kYd*JiGVXq%V-~Ig6%H^M!s7rAL27Wt}-1WV6%yH@3)xq*^UFNw6 zIB>?j$TXRYaj-<zxhm|J**t}hwqviJJn8~ZKQ*7t5;UO6T9PB zNc(e~kYCKH{#hXks@W^kLRnXBcgnNssw(jS z$aZQVWSl>I#yJC8S>&x=qU0ng)p0zmkc@vh1)qoTj4=Ja5=5S2lAJX^#fuW#gkeqs zV{@Fot5WKvIMn_kk>uh=Q^ud~1wCko}j_r8Ls zJYFZGU&QZ9{ksZul>bH4d=(CwX;>JpAE4^`a2N@HU@bd;8CsmOVZ6$P{*r4#Gh;?9 zN*i$`wlGY`Tsv7Z{g?HRkd)lN35fs|bdZ)R3mwgWXrjk(8+rkKHHoq*;E}F)Yw=pB z0(~oEaTC;LdqiA5+C!A;x1<*NosIUT_(Dkkg00~yj9zZa&aZMzxheeuQ4>LV$uN61 zU3~c3ceAml#oH)Gap!&x#5M+GQ44^EOlULC%UDc+Tzzhj$E!b=qONzE@TA3p@?N%M zAq=LdxpxbEwoU4%VAY{RM}|_N!+;RrRwG<7*@*~Je1F?WD#ytw@OHdu%GeBYBo2e_ zC%qN}a5fo(4PxRXCzkcj`UJfX(8w#ng!azwo-I>e{W7$KXdy{M^WUngdBT`ZU;BQT z<}dXOBBe6Hv-Y+{x`9UUr>Wy69{A4&quhXqfP#Yajp#ud9u|g=BTowDE>&Dg&D2gY z!I68uG%G7)j2ov{KY@(gP-(C&er<#ha=b!gW4iA4oV>}Ddnv|EYZ!bbor2V2GR@ln+%%*Y= zOkv4u|MVS#B+-+k-+EoA8X`asP~q&*Z`PMzpm4g!a#;NEtr!?Xka+ga?FTM;GR28V z)`@1{<4vMxC3ffN!a=-u`Zmy4%g{`)lLPjRScEcxZ3fZ0bSatqKk&$`oFPLh zUb16q_pHG9xc9;mkd{)uuy|X(2?H3;ya5}CZ|g)nN1DM8lR)BAelHfehuM3;8yIT` zcDRQ|6ty4n_>BuOa;e2vzjVfHmq~QQH&BpfpTG~*{!N(kXsZaS2e)-Gu7{g*v)^19 z(IQ*~0<@z?O)Hxb{Qo$pq)O?c#~J7}+9K%jndxR09BS0xF-7cv`r|X$dThY4H*JXY z)P(|5(HvgA)_F;XSjl~hcm=lk_QF}vcomH=pLp2@Rs+QOA~^}bX|4SU?>iNLn1OHG z60=|0*IYv(Bqv!SJMda1_q`7E@S62 z^*D7z$kn}Y>F+7iK@WR^ivLVauy%v1$3}*@w9mi$Ai9*JR$2h0=hPSzH+Ue3Uqub! 
z2P>M&DJFs8nZ ze|z#(b50m`6b+f++@F~vaWee>!Y)*y^3ZNH4!OHm3yg{%h;I+J5itLH;H_~;G?5x)kX*z{@F+1~)OIMhC3_LHG2r!IKP158i$5@=B9 zz9@x#s%B4%yLEZ^9>=o|yJtdFd~m+yE9ZBaa6&dM{deJ4At z=m7Eb#-Hjx^U~?WI$jDgfV~%j+JYl=z++&YpKho#g&M8NwC-EnMm#PdQV` zGo}Old?R^B2hT@wX&`NYuptB@p5w1a!nusEK7rlm3+QV(<`q5t5!1OpppDNV_S2-t zk1GFTG^-6qex14V56uLRAcos@d0Hfpo_KO>9b4GuR0IkSkbfLFI+$LnAOstK0VDv1^qb>Mcb3A9vxr6%>?9u;Q1a zZ250}u&7%kFQWjhL*sy9VsS%a&=%H_XAwjqw_2yhZUeig#?;&46*&B6#mkZkDq#(ae)vmHR=kZK}6c+e!)LQ<1LYM$B zGR-tewaT%yAxx0B?t?3k`K&|;SOJKN-k8>Xi($?o*K8!qBE?qv^HGH}94Npb{d3Uo=os6Xx+MHpu0dW~IDKn8Xfm5iH7ZY`YIfZD0eno)*y&QH z+K0WtOumhpla1dXs9@~G6+uv6{=omanRs-z>m@tf!CqUFQz!mD0~RS1xcn(g-8~NL zzVN@vUn-|mQ{DhArVhC&n#q!SY0HoF^%npfCwlVX=pP!pw6rZ0r%_AJ}B25Zb*%-#wm*{r5W*!a~R+Hvobkla{nC zH?ya*JRuR`C1IfckBfeXmRWA_<#zONnvpdec?feSX6^um>h{*|gl&luKy=w=@Xa50 zDnZk|hxv9a$ONG!g(g(krcm`}q5OeUP%tYF`OXTYf7Fe|{Jm72azUR=O4-aBFZ&O z4{j{SPr8e%X1bg2NF?qda~&%t)!~jjFqA)s`e}j2_SA62TA^m{O>}~bio_tioy?w^ zCnbT<*>ApVs?}i7!eC?(Fi8v?C*$6|hJ$UYkYEJRZT~*=Igo%>$ea0ZwSgv;ml|oH zx5UBHnCYf9SecaR#ajNr-<&3vTc<}4mRLuD{#t5r<66T;^knSBOiFY;oN2|RP^vc3Wu zTGlqlN(M&yVr5~Ho$fHP3NJUmnbm;r!tgm{qg>HlQC!SRSIVp-*2M^npddl-8mghZ za~25DFe<$gltIjIdHn2cGs`E?qbtw*t~Uy@|J+G3K|AnfJ6paEk{yK2=*3EguQQ;Z zr^TO;X!&~xC41(bo$ZZM2x+J=<0|x3hP$RPk!g>kx`59!?x zEiC8JX`ErCrP-mXrK^%YOPLAM+`>)yP#&=ecyhY*;m_5*Wv85E2eCG@&9RQ3YtugI zcI{x&5h>U=#F2ozd2oU7;6w~g0qT78?f2HQ2xqgG`TqOwY)S-1k|U2>icUNBgtZCb zFNi=3+KHexPVA}Ljqv<*FtHZ844%oY8JS^f>D?SrF9FZ|+abI7$OuVT4-*5ag$yS^ zF+4-poaFj8#>_w47%SAEoDTESW7)JpQ>CseBLgNxxF`FZ7k0DA&+!nX;YNZ(c}A z9&uRyv%&_DJq73m=O>nu_wVs&0KEC5>GWUrN|}HJ7J_eGi0FEWK`S6V1P`VdB*oM; zU~;rr2B2}J5UOb!FUEuk$?vORfbFbMtXdl-b6Qn}uEs3L)xLMNsBzje0?oTkF?%=b zpWj1OhT&S=r&7EeiBUSa!7F;$P=7M2J9WU%aKQLl=)|B!S^)s%)fc+Y=t!k&-vmj! z4D5ljbV&eT=X&_A4ON0QO4(*^1~CBvzupL`UI+&N;lO4-%YpYN%x29aY^FnuSbET8 z8rb#9uw_Lw^6%@Khd__K^4N$y0@C$N07UOb=!g~&Ic49R$vyroj?C@uMZ9(86#9-AeighR@=WAfe&?|G9-K6&UWvlBx?@?bK6<$bTRR?Lre|ozEjPS zKrqv3fW0EfMbfNk0cxxNewz0FsG#vL+(mrx)de+Ef4tI$n>)ZSUT}~Cx1=$Jtcojg z8SIaw^3!o-xUXX*HGFg&*5MV>nn5FE0PR6besdP~oJ%r)e!NEU(IL%yfV43^h4{q- z2kGE=mBdAg3G>|aKQ#~w3U{4n3itjSL2(WJ!khA%Z1eBE8(NPh#Fk>;`W7FT@z!;z zm=h1+dkE!*!3Z0dZ=}MW>;He;`g$S+F3o5O=Q)!GX@mok-qlh%2?yeC{OHp|mePVz z!9Dlni!tC&9D0jg4U<5GwQGUt{?ToRRUzsW9mAQrlyQFXYcfABVsj7LtSq?`rLI3K zy3$b>?+5Qu{|Qnyn7KaQT^=Gpy$fcM1a}|OEZcqQ$c%ky_HRk^KRzJ*5v@E7d7^)m zXy6LVK1RtoeKQY+u{ml`;12~5JSv3AI&qMTTL1%H)%h{SI3CoA=C`PcBQ<;MeGp%@ zh}6^&nD|?=_mK*h8`&uD9d|ccJ|i3`P--v-rznZC^s&cbe0T@U0y#nK#bwpjIbK8>TNIlZ|mI#=MwRQ6XbGhX-LzVT64gkXz1Wj2Z#GOQY67K z<#1p)w=3AlqZZF2SJahsmF$1HqA36WO6FIx4>8|bjxK`oQF#)&IuXPAZ0Ui+M;q@F zlU?mq^gwyNMhv!4;`}kR13`NN`T*3Hehicu0Fz@K2I0OE8sK%>Q)RPhnUD?&^N%p5 z1J)vvbix8&KEZ@Q@X$Hf)8RT`xnBTi6a-3P(EZi~*HP3G2f|#1#iYpJl){Om6#J4w zy?^54fBqsWVb_Ik?vO0))s7-qW#ZOJ#hrHOoGwM)cBn4U4%UZlq3DJQT*iF2B#$5- z2@oS=1}nT~@8~{UgLxP-M#MJtn{5Zt=nP_*p8y@eRcC(sIyUwNa3w(P4n9o=SAR5o0CkX?h+Q-kPJocag^cDJ zZ;V#t^SqF6VV>5y1R$pcB3=ev>>5f_MP!Uv#Z6xm0LSeC=gOI@uU`iov7bA{YV zs2=`Kt3rBExwMn7bl-v*!bmR4k2^Qa#tDL^NfReP>k*0o`~AoTremmG^ajsa|Hqs5 z?{D7aBxnc(FQjP$2t#VHk=NF^tR@I)v4VdZvE&<-V`WOgr)$I!u%rsDitL?e9wUF= znftRiC{PaKu)~fVkShCwG~nn>0BsZ*B9rsi;gg{Sho@tnut#VAH2von4?&muC>Rs- z2x(Y`_Hs0xu$dxs1ZIKrtP>_G;Y&wcbNPlIrduJ4#lC<+h98TUpXtrN<}?b-)~h&U zt1XnnaDUKWNJoXh4XX4k`b?GR)ETXi6 zs7iXUw(}zSz0BGWJ#{HT#;4G8?CvkHfi?q-ZH0EqeOhfBqpX1qci**htFVe^>q;C;snWV^|?(-=p9P`tT>+&h{mz|21&_ h`_uma{K(eMaxzyHw0iqIB7py?DQTb2Q?R=Ge*ns_1nd9+ literal 0 HcmV?d00001 diff --git a/_site/site/2017/04/27/VQA-Visual-Question-Answering.html b/_site/site/2017/04/27/VQA-Visual-Question-Answering.html new file mode 100644 index 00000000..36f3c864 
--- /dev/null +++ b/_site/site/2017/04/27/VQA-Visual-Question-Answering.html @@ -0,0 +1,106 @@ +

Problem Statement

+ + + +

VQA Challenge and Workshop

+ +
    +
  • The authors organise an annual challenge and workshop to discuss the state-of-the-art methods and best practices in this domain.
  • +
  • Interestingly, the second version is starting on 27th April 2017 (today).
  • +
+ +

Benefits over tasks like image captioning:

+ +
    +
  • Simple, n-gram statistics based methods are not sufficient.
  • +
  • Requires the system to blend in different aspects of knowledge - object detection, activity recognition, commonsense reasoning etc.
  • +
  • Since only short answers are expected, evaluation is easier.
  • +
+ +

Dataset

+ +
    +
  • Created a new dataset of 50,000 realistic abstract scenes.
  • +
  • Used AMT to crowdsource the task of collecting questions and answers for MS COCO dataset (>200K images) and abstract images.
  • +
  • Three questions per image and ten answers per question (along with their confidence) were collected.
  • +
  • The entire dataset contains over 760K questions and 10M answers.
  • +
  • The authors also performed an exhaustive analysis of the dataset to establish its diversity and to explore how the content of these question-answers differ from that of standard image captioning datasets.
  • +
+ +

Highlights of data collection methodology

+ +
    +
  • Emphasis on questions that require an image, and not just common sense, to be answered correctly.
  • +
  • Workers were shown previous questions when writing new questions to increase diversity.
  • +
  • Answers collected from multiple users to account for discrepancies in answers by humans.
  • +
  • Two modalities supported: +
      +
    • Open-ended - produce the answer
    • +
    • multiple-choice - select from a set of provided options (18 options comprising popular, plausible, random and the correct answers)
    • +
    +
  • +
+ +

Highlights from data analysis

+ +
    +
  • Most questions range from four to ten words while answers range from one to three words.
  • +
  • Around 40% questions are “yes/no” questions.
  • +
  • Significant (>80%) inter-human agreement for answers.
  • +
  • The authors performed a study where human evaluators were asked to answer the questions without looking at the images.
  • +
  • Further, they performed a study where evaluators were asked to label whether a question could be answered using common sense alone, and the youngest age group that they felt could answer the question.
  • +
  • The idea was to establish that a sufficient number of questions in the dataset required more than just common sense to answer.
  • +
+ +

Baseline Models

+ +
    +
  • random selection
  • +
  • prior (“yes”) - always answer as yes.
  • +
  • per Q-type prior - pick the most popular answer per question type.
  • +
  • nearest neighbor - find the k nearest neighbors for the given (image, question) pair.
  • +
+ +

Methods

+ +
    +
  • +

    2-channel model (using vision and language models) followed by softmax over (K = 1000) most frequent answers.

    +
  • +
  • Image Channel +
      +
    • I - Used last hidden layer of VGGNet to obtain 4096-dim image embedding.
    • +
    • norm I - l2-normalized version of I.
    • +
    +
  • +
  • Question Channel +
      +
    • BoW Q - Bag-of-Words representation for the questions using the top 1000 words plus the top 10 first, second and third words of the questions.
    • +
    • LSTM Q - Each word is encoded into a 300-dim vector using a fully connected layer + tanh non-linearity. These embeddings are fed to an LSTM to obtain a 1024-dim embedding.
    • +
    • Deeper LSTM Q - Same as LSTM Q but uses two hidden layers to obtain 2048-dim embedding.
    • +
    +
  • +
  • Multi-Layer Perceptron (MLP) - Combine image and question embeddings to obtain a single embedding. +
      +
    • BoW Q + I method - concatenate BoW Q and I embeddings.
    • +
    • LSTM Q + I, deeper LSTM Q + norm I methods - image embedding transformed to 1024-dim using a FC layer and tanh non-linearity, followed by element-wise multiplication of the image and question vectors (see the sketch after this list).
    • +
    +
  • +
  • Pass combined embedding to an MLP - FC neural network with 2 hidden layers (1000 neurons and 0.5 dropout) with tanh, followed by softmax.
  • +
  • Cross-entropy loss with VGGNet parameters frozen.
  • +
+ +
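A minimal sketch of the best-performing combination (deeper LSTM Q + norm I), assuming pre-extracted 4096-dim VGGNet features and a 2048-dim question embedding; the layer sizes follow the summary above, while the names and training details are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateFusionVQA(nn.Module):
    """Deeper LSTM Q + norm I fusion: project both modalities to a common
    1024-dim space, combine by element-wise multiplication, and classify
    with a 2-hidden-layer MLP over the K most frequent answers."""
    def __init__(self, img_dim=4096, ques_dim=2048, common_dim=1024,
                 hidden_dim=1000, num_answers=1000, dropout=0.5):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, common_dim)
        self.ques_proj = nn.Linear(ques_dim, common_dim)
        self.mlp = nn.Sequential(
            nn.Linear(common_dim, hidden_dim), nn.Tanh(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(), nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_answers))

    def forward(self, img_feat, ques_feat):
        img_feat = F.normalize(img_feat, p=2, dim=1)   # the "norm I" step
        img = torch.tanh(self.img_proj(img_feat))      # 4096 -> 1024
        ques = torch.tanh(self.ques_proj(ques_feat))   # 2048 -> 1024
        return self.mlp(img * ques)                    # element-wise fusion

logits = LateFusionVQA()(torch.randn(2, 4096), torch.randn(2, 2048))
loss = F.cross_entropy(logits, torch.tensor([3, 7]))  # VGGNet stays frozen upstream
```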

Results

+ +
    +
  • Deeper LSTM Q + norm I is the best model with 58.16% accuracy on open-ended dataset and 63.09% on multiple-choice but far behind the human evaluators (>80% and >90% respectively).
  • +
  • The best model performs well for answers involving common visual objects but performs poorly for answers involving counts.
  • +
  • Vision only model performs even worse than the model which always produces “yes” as the answer.
  • +
diff --git a/_site/site/2017/04/28/Simple-Baseline-for-Visual-Question-Answering.html b/_site/site/2017/04/28/Simple-Baseline-for-Visual-Question-Answering.html new file mode 100644 index 00000000..049ba2b0 --- /dev/null +++ b/_site/site/2017/04/28/Simple-Baseline-for-Visual-Question-Answering.html @@ -0,0 +1,34 @@ +

Problem Statement

+ +
    +
  • VQA Task: Given an image and a free-form, open-ended, natural language question (about the image), produce the answer for the image.
  • +
  • The paper attempts to fine-tune the simple baseline method of Bag-of-Words + Image features (iBOWIMG) to make it competitive against more sophisticated LSTM models.
  • +
  • Link to the paper
  • +
+ +

Model

+ +
    +
  • VQA modelled as a classification task where the system learns to choose among the top k most frequent answers.
  • +
  • Text Features - Convert input question to a one-hot vector and then transform to word vectors using a word embedding.
  • +
  • Image Features - Last layer activations from GoogLeNet.
  • +
  • Text features are concatenated with image features and fed into a softmax classifier (see the sketch after this list).
  • +
  • Different learning rates and weight clipping are used for the word embedding layer and the softmax layer, with the learning rate of the embedding layer much higher than that of the softmax layer.
  • +
+ +
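A minimal sketch of the iBOWIMG setup, assuming pre-extracted GoogLeNet image features; the vocabulary size, dimensions and the two learning rates are made-up placeholders, not the paper's values.

```python
import torch
import torch.nn as nn

class IBOWIMG(nn.Module):
    """Bag-of-words question features concatenated with frozen CNN image
    features, followed by a single softmax classifier."""
    def __init__(self, vocab_size=10000, embed_dim=300, img_dim=1024,
                 num_answers=1000):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")  # BoW sum
        self.classifier = nn.Linear(embed_dim + img_dim, num_answers)

    def forward(self, question_tokens, img_feat):      # (B, L) ids, (B, img_dim)
        text = self.embed(question_tokens)             # summed word vectors
        return self.classifier(torch.cat([text, img_feat], dim=1))

model = IBOWIMG()
logits = model(torch.randint(0, 10000, (2, 6)), torch.randn(2, 1024))
# Higher learning rate for the embedding layer than for the softmax layer,
# as described above (the exact values here are invented).
optimizer = torch.optim.SGD([
    {"params": model.embed.parameters(), "lr": 0.8},
    {"params": model.classifier.parameters(), "lr": 0.01},
])
```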

Results

+ +
    +
  • iBOWIMG model reports an accuracy of 55.89% for Open-ended questions and 61.97% for Multiple-Choice questions which is comparable to the performance of other, more sophisticated models.
  • +
+ +

Interpretation of the model

+ +
    +
  • Since the model is very simple, it is possible to interpret it and understand what exactly it is learning; this interpretability is the greatest strength of the paper, even though the model itself is naive.
  • +
  • The model attempts to memorise the correlation between the answer class and the informative words (in the question) and image features.
  • +
  • The question words alone can often predict the answer, given the biases in the images of the COCO dataset.
  • +
  • Given the simple linear transformation being used, it is possible to quantify the importance of each word (in the question) to the answer.
  • +
  • The paper uses the Class Activation Mapping (CAM) approach (which uses the linear relation between softmax and final image feature map) to highlight the informative image regions relevant to the predicted answer.
  • +
  • While the results reported by the paper are not themselves so significant, the described approach provides a way to interpret the strengths and weaknesses of different VQA datasets.
  • +
diff --git a/_site/site/2017/05/07/Conditional-Similarity-Networks.html b/_site/site/2017/05/07/Conditional-Similarity-Networks.html new file mode 100644 index 00000000..3d9c83f2 --- /dev/null +++ b/_site/site/2017/05/07/Conditional-Similarity-Networks.html @@ -0,0 +1,103 @@ +

Problem Statement

+ +
    +
  • A common way of measuring image similarity is to embed them into feature spaces where distance acts as a proxy for similarity.
  • +
  • But this feature space can capture one (or a weighted combination) of the many possible notions of similarity.
  • +
  • What if contrasting notions of similarity could be captured at the same time - in terms of semantically distinct subspaces?
  • +
  • The paper proposes a new architecture called Conditional Similarity Networks (CSNs) which learns a disentangled embedding such that the features for different notions of similarity are encoded into separate dimensions.
  • +
  • It jointly learns masks (or feature extractors) that select and reweight the relevant dimensions to induce a subspace that encodes a specific notion of similarity.
  • +
  • Link to the paper
  • +
+ +

Conditional Similarity Networks

+ +
    +
  • Given an image, x, learn a non-linear feature embedding f(x) such that for any 2 images x1 and x2, the euclidean distance between f(x1) and f(x2) reflects their similarity.
  • +
+ +

Conditional Similarity Triplets

+ +
    +
  • Given a triplet of images (x1, x2, x3) and a condition c (the notion of similarity), an oracle (say, the crowd) is used to determine whether x1 is more similar to x2 or to x3 as per the given criterion c.
  • +
  • In general, for images i, j, l, the triplet t is ordered {i, j, l | c} if i is more similar to j than l.
  • +
+ +

Learning From Triplets

+ +
    +
  • Define a loss function LT() to model the similarity structure over the triplets.
  • +
  • LT(i, j, l) = max{0, D(i, j | c) - D(i, l | c) + h}, where D is the euclidean distance function and h is a scalar similarity margin to prevent trivial solutions.
  • +
  • To model conditional similarities, masks m are defined as m = σ(β), where σ is the ReLU unit and β is a set of parameters to be learnt.
  • +
  • mc denotes the c-th column of the mask matrix. It acts as an element-wise gating function which selects the relevant dimensions of the embedding to attend to a particular similarity concept.
  • +
  • The distance function D then computes the masked euclidean distance, D(i, j | c) = ||f(i) * mc - f(j) * mc||2, between the two given images (see the sketch after this list).
  • +
  • Two regularising terms are also added - an L2 norm on the embeddings and an L1 norm on the masks m (to encourage sparsity).
  • +
+ +
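A minimal sketch of this objective, assuming the embeddings f(i), f(j), f(l) have already been produced by the ConvNet; the margin h and the regularization weights are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def masked_distance(f_i, f_j, mask):
    """Euclidean distance in the subspace selected by one mask column."""
    return torch.norm(f_i * mask - f_j * mask, p=2, dim=-1)

def csn_triplet_loss(f_i, f_j, f_l, masks, c, h=0.2, l2_w=5e-4, l1_w=5e-4):
    """Hinge loss for an ordered triplet {i, j, l | c}: i should be closer to
    j than to l under similarity notion c. masks is a (C, D) parameter matrix
    and the ReLU keeps the learned mask non-negative."""
    m_c = F.relu(masks[c])
    d_ij = masked_distance(f_i, f_j, m_c)
    d_il = masked_distance(f_i, f_l, m_c)
    hinge = F.relu(d_ij - d_il + h).mean()
    reg = l2_w * f_i.pow(2).sum(-1).mean() + l1_w * m_c.abs().sum()
    return hinge + reg

masks = torch.randn(4, 64, requires_grad=True)  # 4 similarity notions, 64-dim embedding
f_i, f_j, f_l = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
csn_triplet_loss(f_i, f_j, f_l, masks, c=1).backward()
```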

Experiments

+ +

Datasets

+ +
    +
  • Fonts dataset by Bernhardsson +
      +
    • 3.1 million 64 by 64-pixel grey scale images.
    • +
    +
  • +
  • Zappos50k shoe dataset +
      +
    • Contains 50,000 images of individual richly annotated shoes.
    • +
    • Characteristics of interest: +
        +
      • Type of the shoes (i.e., shoes, boots, sandals or slippers)
      • +
      • Suggested gender of the shoes (i.e., for women, men, girls or boys)
      • +
      • Height of the shoes’ heels (0 to 5 inches)
      • +
      • Closing mechanism of the shoes (buckle, pull on, slip on, hook and loop or laced up)
      • +
      +
    • +
    +
  • +
+ +

Models

+ +
    +
  • Initial model for the experiments is a ConvNet pre-trained on ImageNet
  • +
  • Standard Triplet Network +
      +
    • Learn from all available triplets jointly as if they have the same notion of similarity.
    • +
    +
  • +
  • Set of Task Specific Triplet Networks +
      +
    • Train n separate triplet networks such that each is trained on a single notion of similarity.
    • +
    • Needs far more parameters and compute.
    • +
    +
  • +
  • Conditional Similarity Networks - fixed disjoint masks +
      +
    • In this version, only the convolutional filters and the embedding are learnt, and the masks are predefined to be disjoint.
    • +
    • Aims to learn a fully disjoint embedding.
    • +
    +
  • +
  • Conditional Similarity Networks - learned masks +
      +
    • Learns all the components - conv filters, embedding and the masks.
    • +
    +
  • +
  • Refer paper for details on hyperparameters.
  • +
+ +

Results

+ +
    +
  • Visual exploration of the learned subspaces (t-SNE visualisation) shows that the network successfully disentangles different features in the embedded vector space.
  • +
  • The learned masks are very sparse and share dimensions. This shows that CSNs may learn to use only the required number of dimensions, thereby doing away with the need to pick the right embedding size.
  • +
  • Order of performance: +
      +
    • CSNs with learned masks > CSNs with fixed masks > Task-specific networks > standard triplet network.
    • +
    • Though CSNs with learned masks require more training data.
    • +
    +
  • +
  • CSNs also outperform the Standard Triplet Network when used as off-the-shelf features for the (brand) classification task and are very close to the performance of a ResNet trained on ImageNet.
  • +
  • This shows that while the CSN retains most of the information in the original network, the training mechanism of the Standard Triplet Network hurts the underlying conv features and their generalising capability.
  • +
diff --git a/_site/site/2017/05/14/Making-the-V-in-VQA-Matter-Elevating-the-Role-of-Image-Understanding-in-Visual-Question-Answering.html b/_site/site/2017/05/14/Making-the-V-in-VQA-Matter-Elevating-the-Role-of-Image-Understanding-in-Visual-Question-Answering.html new file mode 100644 index 00000000..f1e7cfb1 --- /dev/null +++ b/_site/site/2017/05/14/Making-the-V-in-VQA-Matter-Elevating-the-Role-of-Image-Understanding-in-Visual-Question-Answering.html @@ -0,0 +1,38 @@ +

Problem Statement

+ +
    +
  • Standard VQA models benefit from the inherent bias in the structure of the world and the language of the question.
  • +
  • For example, if the question starts with “Do you see a …”, the answer is more likely to be “yes” than “no”.
  • +
  • To truly assess the capability of any VQA system, we need to have evaluation tasks that require the use of both the visual and the language modality.
  • +
  • The authors present a balanced version of VQA dataset where each question in the dataset is associated with a pair of similar images such that the same question would give different answers on the two images.
  • +
  • The proposed data collection procedure enables the authors to develop a novel interpretable model which, given an image and a question, identifies an image that is similar to the original image but has a different answer to the same question, thereby building trust in the system.
  • +
  • Link to the paper
  • +
+ +

Dataset Collection

+ +
    +
  • Given an (image, question, answer) triplet (I, Q, A) from the VQA dataset, a human worker (on AMT) is asked to identify an image I’ which is similar to I but for which the answer to question Q is A’ (different from A).
  • +
  • To facilitate the search for I’, the worker is shown 24 nearest-neighbor images of I (based on VGGNet features) and is asked to choose the most similar image to I, for which Q makes sense and answer for Q is different than A. In case none of the 24 images qualifies, the worker may select “not possible”.
  • +
  • In the second round, the workers were asked to answer Q for I’.
  • +
  • This 2-stage protocol results in a significantly more balanced dataset than the previous dataset.
  • +
+ +

Observation

+ +
    +
  • State-of-the-art models trained on unbalanced VQA dataset perform significantly worse on the new, balanced dataset indicating that those models benefitted from the language bias in the older dataset.
  • +
  • Training on balanced dataset improves performance on the unbalanced dataset.
  • +
  • Further, the VQA model, trained on the balanced dataset, learns to differentiate between otherwise similar images.
  • +
+ +

Counter-example Explanations

+ +
    +
  • Given an image and a question, the model not only answers the question, it also provides an image (from the k nearest neighbours of I, based on VGGNet features) which is similar to the input image but for which the model would have given a different answer to the same question.
  • +
  • Supervising signal is provided by the data collection procedure where humans pick the image I’ from the same set of candidate images.
  • +
  • For each image in the candidate set, compute the inner product of question-image embedding and answer embedding.
  • +
  • The K inner product values are passed through a fully connected layer to generate K scores.
  • +
  • Trained with a pairwise hinge ranking loss so that the score of the human-picked image is higher than the score of all other images by a margin of M (a hyperparameter); a sketch follows this list.
  • +
  • The proposed explanation model achieves a recall@5 of 43.49%.
  • +
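A sketch of this ranking objective, assuming the K candidate question-image embeddings, the answer embedding and the index of the human-picked image are given; the FC layer over the K inner products is modelled as a simple Linear(K, K), and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def explanation_ranking_loss(qi_embed, ans_embed, score_fc, human_idx, margin=1.0):
    """qi_embed: (K, D) question-image embeddings of the K candidate images;
    ans_embed: (D,) answer embedding; score_fc maps the K inner products to K
    scores. The human-picked image should outscore every other candidate by
    the margin."""
    inner = qi_embed @ ans_embed                 # (K,) inner products
    scores = score_fc(inner)                     # (K,) scores
    pos = scores[human_idx]
    neg = torch.cat([scores[:human_idx], scores[human_idx + 1:]])
    return F.relu(margin - (pos - neg)).mean()   # pairwise hinge over all negatives

K, D = 24, 512
loss = explanation_ranking_loss(torch.randn(K, D), torch.randn(D),
                                nn.Linear(K, K), human_idx=5)
```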
diff --git a/_site/site/2017/05/23/Neural-Module-Networks.html b/_site/site/2017/05/23/Neural-Module-Networks.html new file mode 100644 index 00000000..4cc6246c --- /dev/null +++ b/_site/site/2017/05/23/Neural-Module-Networks.html @@ -0,0 +1,74 @@ +

Introduction

+ +
    +
  • For the task of Visual Question Answering, decompose a question into its linguistic substructures and train a neural network module for each substructure.
  • +
  • Jointly train the modules and dynamically compose them into deep networks which can learn to answer the question.
  • +
  • Start by analyzing the question and decide what logical units are needed to answer the question and what should be the relationship between them.
  • +
  • The paper also introduces a new dataset for Visual Question Answering which has challenging, highly compositional questions about abstract shapes.
  • +
  • Link to the paper
  • +
+ +

Inspiration

+ +
    +
  • Questions tend to be compositional.
  • +
  • Different architectures are needed for different tasks - CNNs for object detection, RNNs for counting.
  • +
  • Recurrent and Recursive Neural Networks also use the idea of a different network graph for each input.
  • +
+ +

Neural Module Network for VQA

+ +
    +
  • Training samples of form (w, x, y) +
      +
    • w - Natural Language Question
    • +
    • x - Images
    • +
    • y - Answer
    • +
    +
  • +
  • Model specified by collection of modules {m} and a network layout predictor P.
  • +
  • Model instantiates a network based on P(w) and uses that to encode a distribution P(y|w, x, model_params)
  • +
+ +

Modules

+ +
    +
  • Find: Finds objects of interest.
  • +
  • Transform: Shift regions of attention.
  • +
  • Combine: Merge two attention maps into a single one.
  • +
  • Describe: Map a pair of attention and input image to a distribution over the labels.
  • +
  • Measure: Map attention to a distribution over the labels.
  • +
+ +

Natural Language Question to Networks

+ +
    +
  • Map question to the layout which specifies the set of modules and connections between them.
  • +
  • Assemble the final network using the layout.
  • +
  • Parse the input question to obtain a set of dependencies and map them to a representation similar to combinatory logic.
  • +
  • eg “what is the colour of the truck?” becomes “colour(truck)”
  • +
  • The symbolic representation is mapped to a layout (see the toy sketch after this list): +
      +
    • All leaves become find module.
    • +
    • All internal nodes become transform/combine module.
    • +
    • All root nodes become describe/measure module.
    • +
    +
  • +
+ +
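A toy, string-based sketch of the layout-assembly rule above; real NMN modules are small neural networks operating over attention maps, and the parser-to-layout mapping is more involved than shown here.

```python
def assemble(expr):
    """Map a symbolic layout such as 'colour(truck)' to a module composition:
    leaves become find modules and the root a describe/measure module
    (internal transform/combine nodes are omitted in this toy version)."""
    if "(" not in expr:                      # leaf -> find module
        return f"find[{expr}]"
    head, rest = expr.split("(", 1)
    inner = assemble(rest.rstrip(")"))
    return f"describe[{head}]({inner})"      # root -> describe module

print(assemble("colour(truck)"))             # describe[colour](find[truck])
```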

Answering Natural Language Question

+ +
    +
  • Final model combines output from a simple LSTM question encoder with the output of the neural module network.
  • +
  • This helps in modelling the syntactic and semantic regularities of the question.
  • +
+ +

Experiments

+ +
    +
  • Since some modules are updated more frequently than others, adaptive per-weight learning rates work better.
  • +
  • The paper introduces a small SHAPES dataset (244 unique questions, each paired with 64 different images).
  • +
  • Neural Module Network achieves a score of 90% on SHAPES dataset while VIS + LSTM baseline achieves an accuracy of 65.3%.
  • +
  • Even on natural images (VQA dataset), the neural module network outperforms the VIS + LSTM baseline.
  • +
+ diff --git a/_site/site/2017/06/03/A-Fast-and-Accurate-Dependency-Parser-using-Neural-Networks.html b/_site/site/2017/06/03/A-Fast-and-Accurate-Dependency-Parser-using-Neural-Networks.html new file mode 100644 index 00000000..e554c0da --- /dev/null +++ b/_site/site/2017/06/03/A-Fast-and-Accurate-Dependency-Parser-using-Neural-Networks.html @@ -0,0 +1,67 @@ +

Introduction

+
    +
  • The paper proposes a neural network classifier to perform transition-based dependency parsing using dense vector representation for the features.
  • +
  • Earlier approaches used a large, manually designed sparse feature vector which took a lot of time and effort to compute and was often incomplete.
  • +
  • Link to the paper
  • +
+ +

Description of the system

+ +
    +
  • The system described in the paper uses arc-standard system (a greedy, transition-based dependency parsing system).
  • +
  • Words, POS tags and arc labels are represented as d-dimensional vectors.
  • +
  • Sw, St, Sl denote the set of words, POS and labels respectively.
  • +
  • The neural network takes as input selected elements from the 3 sets and uses a single hidden layer followed by a softmax which models the different actions that can be chosen by the arc-standard system.
  • +
  • Uses a cube activation function, g(x) = x^3, to allow interactions between features coming from the words, POS tags and labels in the first layer itself; these features come from different embeddings and are not otherwise related (see the sketch after this list).
  • +
  • Using separate embedding for POS tags and labels allow for capturing aspects like NN (singular noun) should be closer to NNS (plural noun) than DT (determiner).
  • +
  • Input to the network contains words on the stack and buffer and their left and right children (read up on transition-based parsing), along with their POS tags and the corresponding arc labels.
  • +
  • Output generated by the system is the action to be taken (transition to be performed) when reading each word in the input.
  • +
  • This sequential and deterministic nature of the input-output mapping allows the problem to be modelled as a supervised learning problem, and a cross-entropy loss can be used.
  • +
  • L2-regularization term is also added to the loss.
  • +
  • During inference, a greedy decoding strategy is used and transition with the highest score is chosen.
  • +
  • The paper mentions a pre-computation trick where the matrix computations for the top 10,000 most frequent words are performed beforehand and cached.
  • +
+ +
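A sketch of the classifier with the cube activation; the 48 features and 50-dim embeddings follow the paper's setup, but the single shared embedding table and the 3 unlabeled arc-standard actions are simplifications.

```python
import torch
import torch.nn as nn

class CubeParserClassifier(nn.Module):
    """One hidden layer with the cube activation g(x) = x**3, scoring
    parser transitions from concatenated word/POS/label embeddings."""
    def __init__(self, n_feats=48, embed_dim=50, hidden=200, n_actions=3,
                 vocab=20000):
        super().__init__()
        # single table for the sketch; the paper embeds words, POS tags and
        # arc labels with separate embedding matrices
        self.embed = nn.Embedding(vocab, embed_dim)
        self.hidden = nn.Linear(n_feats * embed_dim, hidden)
        self.out = nn.Linear(hidden, n_actions)  # shift / left-arc / right-arc

    def forward(self, feat_ids):                 # (B, n_feats) feature indices
        x = self.embed(feat_ids).flatten(1)
        h = self.hidden(x) ** 3                  # cube activation
        return self.out(h)                       # train with cross-entropy

logits = CubeParserClassifier()(torch.randint(0, 20000, (4, 48)))
```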

Experiments

+ +
    +
  • Dataset +
      +
    • English Penn Treebank (PTB)
    • +
    • Chinese Penn Treebank (CTB)
    • +
    +
  • +
  • Two dependency representations used: +
      +
    • CoNLL Syntactic Dependencies (CD)
    • +
    • Stanford Basic Dependencies (SD)
    • +
    +
  • +
  • Metrics: +
      +
    • Unlabeled Attached Scores (UAS)
    • +
    • Labeled Attached Scores (LAS)
    • +
    +
  • +
  • Benchmarked against: +
      +
    • Greedy arc-eager parser
    • +
    • Greedy arc-standard parser
    • +
    • Malt-Parser
    • +
    • MSTParser
    • +
    +
  • +
  • Results +
      +
    • The system proposed in the paper outperforms all other parsers in both speed and accuracy.
    • +
    +
  • +
+ +

Analysis

+ +
    +
  • Cube function gives a 0.8-1.2% improvement over tanh.
  • +
  • Pretrained embeddings give 0.7-1.7% improvement over training embeddings from scratch.
  • +
  • Using POS and labels gives an improvement of 1.7% and 0.4% respectively.
  • +
diff --git a/_site/site/2017/06/17/A-Decomposable-Attention-Model-for-Natural-Language-Inference.html b/_site/site/2017/06/17/A-Decomposable-Attention-Model-for-Natural-Language-Inference.html new file mode 100644 index 00000000..d99ce674 --- /dev/null +++ b/_site/site/2017/06/17/A-Decomposable-Attention-Model-for-Natural-Language-Inference.html @@ -0,0 +1,66 @@ +

Introduction

+ +
    +
  • The paper proposes an attention based mechanism to decompose the problem of Natural Language Inference (NLI) into parallelizable subproblems.
  • +
  • Further, it uses much fewer parameters as compared to any other model while obtaining state of the art results.
  • +
  • Link to the paper
  • +
  • The motivation behind the paper is that tasks like NLI do not require deep modelling of the sentence structure, and comparison of local text substructures followed by aggregation can also work very well.
  • +
+ +

Approach

+ +
    +
  • +

    Given two sentences a and b, the model has to predict whether they have an “entailment” relationship, “neutral” relationship or “contradiction” relationship.

    +
  • +
  • Embed +
      +
    • All the words are mapped to their corresponding word vector representation. In subsequent steps, “word” refers to the word vector representation of the actual word.
    • +
    +
  • +
  • Attend +
      +
    • For each word i in a and j in b, obtain unnormalized attention weights e(i, j) = F(i)^T F(j), where F is a feed-forward neural network.
    • +
    • For each i, compute βi as the weighted sum of all words j in b, with weights obtained by softmax-normalizing e(i, j) over j.
    • +
    • βi captures the subphrase in b that is softly aligned to word i of a.
    • +
    • Similarly compute αj for each j.
    • +
    +
  • +
  • Compare +
      +
    • Create two sets of comparison vectors, one for a and another for b.
    • +
    • For each word i in a, v1,i = G(concatenate(i, βi)).
    • +
    • Similarly, for each word j in b, v2,j = G(concatenate(j, αj)).
    • +
    • G is another feed-forward neural network (see the sketch after this list).
    • +
    +
  • +
  • Aggregate +
      +
    • Aggregate (by summation) over the two sets of comparison vectors to obtain v1 and v2.
    • +
    • Feed the aggregated results through the final classifier layer.
    • +
    • Multi-class cross-entropy loss function.
    • +
    +
  • +
  • The paper also explains how this representation can be augmented using intra-sentence attention to model compositional relationships between words.
  • +
+ +
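A sketch of the attend/compare/aggregate steps; F_net and G_net stand in for the feed-forward networks F and G, all dimensions are illustrative, and the final classifier H is only indicated in a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def decomposable_attention(a, b, F_net, G_net):
    """Attend and Compare steps for sentences a (m, d) and b (n, d)."""
    e = F_net(a) @ F_net(b).T                 # (m, n) unnormalized weights e(i, j)
    beta = F.softmax(e, dim=1) @ b            # (m, d): subphrase of b aligned to each i
    alpha = F.softmax(e, dim=0).T @ a         # (n, d): subphrase of a aligned to each j
    v1 = G_net(torch.cat([a, beta], dim=-1))  # compare each word of a with its alignment
    v2 = G_net(torch.cat([b, alpha], dim=-1))
    return v1.sum(0), v2.sum(0)               # aggregate by summation

d = 8
F_net = nn.Sequential(nn.Linear(d, d), nn.ReLU())
G_net = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
v1, v2 = decomposable_attention(torch.randn(5, d), torch.randn(7, d), F_net, G_net)
# The final classifier H would take torch.cat([v1, v2]) and predict the label.
```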

Computational Complexity

+ +
    +
  • Computationally, the proposed model is asymptotically as good as LSTM with attention.
  • +
  • Assuming that dimensionality of word vectors > length of the sentence (reasonable for the given SNLI dataset), the model is asymptotically as good as regular LSTM.
  • +
  • Further, the model has the advantage of being parallelizable.
  • +
+ +

Experiment

+ +
    +
  • On the Stanford Natural Language Inference (SNLI) dataset, the proposed model achieves state-of-the-art results even though it uses an order of magnitude fewer parameters than the next best model.
  • +
  • Adding intra-sentence attention further improves the test accuracy by 0.5 percent.
  • +
+ +

Notes

+ +
    +
  • A similar approach could be tried on the paraphrase detection problem, as even that problem should not require very deep sentence representations. The Quora Duplicate Question Detection Challenge would have been an ideal dataset, but it has a lot of out-of-vocabulary information related to named entities which needs to be accounted for.
  • +
diff --git a/_site/site/2017/06/26/Two-Too-Simple-Adaptations-of-Word2Vec-for-Syntax-Problems.html b/_site/site/2017/06/26/Two-Too-Simple-Adaptations-of-Word2Vec-for-Syntax-Problems.html new file mode 100644 index 00000000..c0c28d97 --- /dev/null +++ b/_site/site/2017/06/26/Two-Too-Simple-Adaptations-of-Word2Vec-for-Syntax-Problems.html @@ -0,0 +1,9 @@ +
    +
  • The paper proposes two variants of Word2Vec model so that it may account for syntactic properties of words and perform better on syntactic tasks like POS tagging and dependency parsing.
  • +
  • Link to the paper
  • +
  • In the original Skip-Gram setting, the model predicts the 2c words in the context window (c is the size of the context window). But it uses the same set of parameters whether predicting the word next to the centre word or the word farthest away, thus losing all information about the word order.
  • +
  • Similarly, the CBOW (Continuous Bag Of Words) model just adds the embeddings of all the surrounding words, thereby losing the word order information.
  • +
  • The paper proposes to use a set of 2c matrices, one per position in the context window, for both the Skip-Gram and CBOW models (see the sketch below).
  • +
  • This simple trick allows for accounting of syntactic properties in the word vectors and improves the performance of dependency parsing task and POS tagging.
  • +
  • The downside of using this is that now the model has far more parameters than before which increases the training time and needs a large enough corpus to avoid sparse representation.
  • +
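A sketch of the position-dependent output parameters for the Skip-Gram variant; the vocabulary size, dimensionality and training loop are omitted or illustrative.

```python
import torch
import torch.nn as nn

class StructuredSkipGram(nn.Module):
    """Skip-gram variant with a separate output matrix for each of the 2c
    context positions, so word-order information is preserved."""
    def __init__(self, vocab=10000, dim=100, window=2):
        super().__init__()
        self.in_embed = nn.Embedding(vocab, dim)
        # one output projection per relative position -c..-1, +1..+c
        self.out_by_pos = nn.ModuleList(
            nn.Linear(dim, vocab, bias=False) for _ in range(2 * window))

    def forward(self, center, position):       # position in [0, 2c)
        return self.out_by_pos[position](self.in_embed(center))

model = StructuredSkipGram()
logits = model(torch.tensor([42]), position=0)  # predict the word at offset -c
```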
diff --git a/_site/site/2017/07/01/One-Model-To-Learn-Them-All.html b/_site/site/2017/07/01/One-Model-To-Learn-Them-All.html new file mode 100644 index 00000000..a06941bd --- /dev/null +++ b/_site/site/2017/07/01/One-Model-To-Learn-Them-All.html @@ -0,0 +1,177 @@ +
    +
  • +

    The current trend in deep learning is to design, train and fine tune a separate model for each problem.

    +
  • +
  • +

    Though multi-task models have been explored, they have been trained for problems from the same domain only and no competitive multi-task, multi-modal models have been proposed.

    +
  • +
  • +

    The paper explores the possibility of such a unified deep learning model that can solve different tasks across multiple domains by training concurrently on them.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Design Philosophy

+ +
    +
  • +

    Small, modality-specific subnetworks (called modality nets) should be used to map input data to a joint representation space and back.

    + +
      +
    • +

      The joint representation is to be of variable size.

      +
    • +
    • +

      Different tasks from the same domain share the modality net.

      +
    • +
    +
  • +
  • +

    MultiModel networks should use computational blocks from different domains even if they are not specifically designed for the task at hand.

    + +
      +
      • E.g., the paper reports that attention and mixture-of-experts (MoE) layers slightly improve the performance on ImageNet even though they are not explicitly needed.
    • +
    +
  • +
+ +

Architecture

+ +
    +
  • +

    The MultiModel network consists of a few small modality nets, an encoder, an I/O mixer and an autoregressive decoder.

    +
  • +
  • +

    Encoder and decoder use the following computational blocks:

    + +
      +
    • +

      Convolutional Block

      + +
        +
      • ReLU activations on inputs followed by depthwise separable convolutions and layer normalization.
      • +
      +
    • +
    • +

      Attention Block

      + +
        +
      • Multihead, dot product based attention mechanism.
      • +
      +
    • +
    • +

      Mixture-of-Experts (MoE) Block

      + +
        +
        • Consists of simple feed-forward networks (called experts) and a trainable gating network which selects a sparse combination of experts to process the inputs (see the sketch after this section).
      • +
      +
    • +
    • +

      For further details, refer the original paper.

      +
    • +
    +
  • +
  • +

    Encoder consists of 6 conv blocks with a MoE block in the middle.

    +
  • +
  • +

    I/O mixer consists of an attention block and 2 conv blocks.

    +
  • +
  • +

    Decoder consists of 4 blocks of convolution and attention with a MoE block in the middle.

    +
  • +
  • +

    Modality Nets

    + +
      +
    • +

      Language Data

      + +
        +
      • +

        Input is the sequence of tokens ending in a termination token.

        +
      • +
      • +

        This sequence is mapped to correct dimensionality using a learned embedding.

        +
      • +
      • +

        For output, the network takes the decoded output and performs a learned linear mapping followed by Softmax.

        +
      • +
      +
    • +
    • +

      Image and Categorical Data

      + +
        +
      • +

        Uses residual convolution blocks.

        +
      • +
      • +

        Similar to the exit flow for Xception Network

        +
      • +
      +
    • +
    • +

      Audio Data

      + +
        +
      • 1-d waveform over time or 2-d spectrogram operated upon by stack of 8 residual convolution blocks.
      • +
      +
    • +
    +
  • +
+ +
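A sketch of the sparse mixture-of-experts idea used inside these blocks; the plain top-k gating here is a simplification (the paper's noisy gating and load-balancing details are omitted), and the dense loop favours clarity over speed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Mixture-of-experts block: a trainable gate picks a sparse (top-k)
    combination of small feed-forward experts for every input."""
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)
        self.k = k

    def forward(self, x):                             # x: (B, dim)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # renormalize the k gates
        out = torch.zeros_like(x)
        for b in range(x.size(0)):                    # dense loop for clarity
            for slot in range(self.k):
                expert = self.experts[int(idx[b, slot])]
                out[b] += weights[b, slot] * expert(x[b])
        return out

y = SparseMoE()(torch.randn(4, 64))
```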

Tasks

+ +
    +
  • +

    WSJ speech corpus

    +
  • +
  • +

    ImageNet dataset

    +
  • +
  • +

    COCO image captioning dataset

    +
  • +
  • +

    WSJ parsing dataset

    +
  • +
  • +

    WMT English-German translation corpus

    +
  • +
  • +

    German-English translation

    +
  • +
  • +

    WMT English-French translation corpus

    +
  • +
  • +

    German-French translation

    +
  • +
+ +

Experiments

+ +
    +
  • +

    The experimental section is not very rigorous with many details skipped (would probably be added later).

    +
  • +
  • +

    While MultiModel does not beat the state of the art models, it does outperform some recent models.

    +
  • +
  • +

    The jointly trained model performs similarly to singly trained models on tasks with a lot of data and sometimes outperforms them on tasks with less data (like parsing).

    +
  • +
  • +

    Interestingly, jointly training the model for parsing task and Imagenet tasks improves the performance of parsing task even though the two tasks are seemingly unrelated.

    +
  • +
  • +

    Another experiment was done to evaluate the effect of components (like MoE) on tasks (like ImageNet) which do not explicitly need them. It was observed that the performance either went down or remained the same when the MoE component was removed. This indicates that mixing different components does help to improve performance over multiple tasks.

    +
  • +
  • +

    But this observation is not conclusive as a different combination of say the encoder (that does not use MoE) could achieve better performance than one that does. The paper does not explore possibilities like these.

    +
  • +
diff --git a/_site/site/2017/07/09/Ask-Me-Anything-Dynamic-Memory-Networks-for-Natural-Language-Processing.html b/_site/site/2017/07/09/Ask-Me-Anything-Dynamic-Memory-Networks-for-Natural-Language-Processing.html new file mode 100644 index 00000000..3d7cc9a8 --- /dev/null +++ b/_site/site/2017/07/09/Ask-Me-Anything-Dynamic-Memory-Networks-for-Natural-Language-Processing.html @@ -0,0 +1,153 @@ +

Introduction

+ +
    +
  • +

    Dynamic Memory Networks (DMN) is a neural network based general framework that can be used for tasks like sequence tagging, classification, sequence to sequence and question answering requiring transitive reasoning.

    +
  • +
  • +

    The basic idea is that all these tasks can be modelled as question answering task in general and a common architecture could be used for solving them.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Architecture

+ +
    +
  • DMN takes as input a document (sentence, story, article, etc.) and a question which is to be answered given the document.
  • +
+ +

Input Module

+ +
    +
  • +

    Concatenate all the sentences (or facts) in the document and encode them by feeding the word embeddings of the text to a GRU.

    +
  • +
  • +

    Each time a sentence ends, extract the hidden representation of the GRU till that point and use as the encoded representation of the sentence.

    +
  • +
+ +

Question Module

+ +
    +
  • Similarly, feed the question to a GRU to obtain its representation.
  • +
+ +

Episodic Memory Module

+ +
    +
  • +

    Episodic memory consists of an attention mechanism and a recurrent network with which it updates its memory.

    +
  • +
  • +

    During each iteration, the network generates an episode e by attending over the representation of the sentences, question and the previous memory.

    +
  • +
  • +

    The episodic memory is updated using the current episode and the previous memory.

    +
  • +
  • +

    Depending on the amount of supervision available, the network may perform multiple passes. eg, in the bAbI dataset, some tasks specify how many passes would be needed and which sentence should be attended to in each pass. For others, a fixed number of passes are made.

    +
  • +
  • +

    Multiple passes allow the network to perform transitive inference.

    +
  • +
+ +

Attention Mechanism

+ +
    +
  • +

    Given the input representation c, memory m and question q, produce a scalar score using a 2-layer feedforward network, to use as the attention weight (see the sketch after this list).

    +
  • +
  • +

    A separate GRU encodes the input representation and weights it by the attention.

    +
  • +
  • +

    Final state of the GRU is fed to the answer module.

    +
  • +
+ +
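A sketch of the gated episode computation described above; the interaction features z are one common choice rather than the paper's exact set, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class EpisodicAttention(nn.Module):
    """2-layer feed-forward network producing a scalar gate g for each fact,
    given the fact c, question q and previous memory m; a GRU then weights
    its state updates by the gate."""
    def __init__(self, dim=64, hidden=128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(4 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.gru_cell = nn.GRUCell(dim, dim)

    def forward(self, facts, q, m):                  # facts: (T, dim)
        h = torch.zeros(1, facts.size(1))
        for c in facts:                              # one timestep per fact
            z = torch.cat([c * q, c * m, (c - q).abs(), (c - m).abs()], -1)
            g = torch.sigmoid(self.score(z))         # scalar attention gate
            h_new = self.gru_cell(c.unsqueeze(0), h)
            h = g * h_new + (1 - g) * h              # gated state update
        return h                                     # episode e, fed to the memory update

facts, q, m = torch.randn(5, 64), torch.randn(64), torch.randn(64)
e = EpisodicAttention()(facts, q, m)
```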

Answer Module

+ +
    +
  • Use a GRU (initialized with the final state of the episodic module) and at each timestep, feed it the question vector, last hidden state of the same GRU and the previously predicted output.
  • +
+ +

Training

+ +
    +
  • There are two possible losses: +
      +
    • Cross-entropy loss of the predicted answer (all datasets)
    • +
    • Cross-entropy loss of the attention supervision (for datasets like bAbI)
    • +
    +
  • +
+ +

Experiments

+ +

Question Answering

+ +
    +
  • +

    bAbI Dataset

    +
  • +
  • +

    For most tasks, DMN either outperforms or performs as good as Memory Networks.

    +
  • +
  • +

    For tasks like answering with 2 or 3 supporting facts, DMN lags because of the limitations of RNNs in modelling long sequences.

    +
  • +
+ +

Text Classification

+ +
    +
  • +

    Stanford Sentiment Treebank Dataset

    +
  • +
  • +

    DMN outperforms all the baselines for both binary and fine-grained sentiment analysis.

    +
  • +
+ +

Sequence Tagging

+ +
    +
  • +

    Wall Street Journal Dataset

    +
  • +
  • +

    DMN achieves a state-of-the-art accuracy of 97.56%.

    +
  • +
+ +

Observations

+ +
    +
  • +

    Multiple passes help in reasoning tasks but not so much for sentiment/POS tags.

    +
  • +
  • +

    Attention in the case of 2-iteration DMN is more focused than attention in 1-iteration DMN.

    +
  • +
  • +

    For 2-iteration DMN, attention in the second iteration focuses only on relevant words and less attention is paid to words that lose their relevance in the context of the entire document.

    +
  • +
+ +

Notes

+ +
    +
  • +

    It would be interesting to put some mechanism in place to determine the number of episodes that should be generated before an answer is predicted. A naive way would be to predict the answer after each episode and check if the softmax score of the predicted answer is more than a threshold.

    +
  • +
  • +

    Alternatively, the softmax score and other information could be fed to a Reinforcement Learning (RL) agent which decides if the document should be read again. So, every time an episode is generated, the state is passed to the RL agent, which decides if another iteration should be performed. If it decides to predict the answer and the correct answer is generated, the agent gets a large positive reward; otherwise, it gets a large negative reward.

    +
  • +
  • +

    To discourage unnecessary iterations, a small negative reward could be given every time the agent decides to perform another iteration.

    +
  • +
diff --git a/_site/site/2017/07/17/Principled-Detection-of-Out-of-Distribution-Examples-in-Neural-Networks.html b/_site/site/2017/07/17/Principled-Detection-of-Out-of-Distribution-Examples-in-Neural-Networks.html new file mode 100644 index 00000000..90ce6089 --- /dev/null +++ b/_site/site/2017/07/17/Principled-Detection-of-Out-of-Distribution-Examples-in-Neural-Networks.html @@ -0,0 +1,120 @@ +

Problem Statement

+ +
    +
  • +

    Given a pre-trained neural network, which is trained using data from some distribution P (referred to as in-distribution data), the task is to detect the examples coming from a distribution Q which is different from P (referred to as out-of-distribution data).

    +
  • +
  • +

    For example, if a digit recognizer neural network is trained using MNIST images, an out-of-distribution example would be images of animals.

    +
  • +
  • +

    Neural Networks can make high confidence predictions even in such cases where the input is unrecognisable or irrelevant.

    +
  • +
  • +

    The paper proposes ODIN which can detect such out-of-distribution examples without changing the pre-trained model itself.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

ODIN

+ +
    +
  • +

    Uses 2 major techniques

    + +
      +
    • Temperature Scaling +
        +
      • +

        Softmax classifier for the classification network can be written as:

        + +

        p_i(x, T) = exp(f_i(x) / T) / sum_j exp(f_j(x) / T)

        +
      • +
      + +

      where x is the input, p is the softmax probability and T is the temperature scaling parameter.

      + +
        +
      • Increasing T (up to some extent) boosts the performance in distinguishing in-distribution and out-of-distribution examples.
      • +
      +
    • +
    • Input Preprocessing +
        +
      • +

        Add small perturbations to the input (image) before feeding it into the network.

        +
      • +
      • +

        x_perturbed = x - ε * sign(-δxlog(py(x, T)))

        +
      • +
      + +

      where ε is the perturbation magnitude

      + +
        +
      • The perturbations are such that softmax scores between in-distribution and out-of-distribution samples become separable.
      • +
      +
    • +
    +
  • +
  • Given an input (image), first perturb the input.
  • +
  • Feed the perturbed input to the network to get its softmax score.
  • +
  • If the softmax score is greater than some threshold, mark the input as in-distribution and feed in the unperturbed version of the input to the network for classification.
  • +
  • Otherwise, mark the input as out-of-distribution.
  • +
  • For detailed mathematical treatment, refer section 6 and appendix in the paper
  • +
+ +
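A minimal PyTorch sketch of the two ODIN steps above, assuming a generic classifier `model`; hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0012):
    """ODIN: temperature-scaled softmax score on a slightly perturbed input."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / T                          # temperature scaling
    log_py = F.log_softmax(logits, dim=1).max(dim=1).values
    log_py.sum().backward()                        # gradient of log p_y w.r.t. x
    x_pert = x - eps * torch.sign(-x.grad)         # input preprocessing step
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values                 # threshold this score

# usage: in_dist = odin_score(net, images) > delta
```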

Experiments

  • Code available on GitHub
  • Models
    • DenseNet with depth L = 100 and growth rate k = 12
    • Wide ResNet with depth = 28 and widen factor = 10
  • In-Distribution Datasets
    • CIFAR-10
    • CIFAR-100
  • Out-of-Distribution Datasets
    • TinyImageNet
    • LSUN
    • iSUN
    • Gaussian Noise
  • Metrics
    • False Positive Rate at 95% True Positive Rate
    • Detection Error - minimum misclassification probability over all thresholds
    • Area Under the Receiver Operating Characteristic Curve
    • Area Under the Precision-Recall Curve
  • ODIN outperforms the baseline across all datasets and all models by a good margin.

Notes

  • Very simple and straightforward approach with theoretical justification under some conditions.
  • Limited to examples from vision, so its applicability to NLP tasks cannot be judged.
diff --git a/_site/site/2017/07/24/ReasoNet-Learning-to-Stop-Reading-in-Machine-Comprehension.html b/_site/site/2017/07/24/ReasoNet-Learning-to-Stop-Reading-in-Machine-Comprehension.html new file mode 100644 index 00000000..e2d5240e --- /dev/null +++ b/_site/site/2017/07/24/ReasoNet-Learning-to-Stop-Reading-in-Machine-Comprehension.html @@ -0,0 +1,129 @@ +

Introduction

  • In the domain of machine comprehension, making multiple passes over the given document is an effective technique for extracting the relation between the given passage, question and answer.
  • Unlike previous approaches, which perform a fixed number of passes over the passage, Reasoning Network (ReasoNet) uses reinforcement learning (RL) to decide how many times a document should be read.
  • Every time the document is read, ReasoNet determines whether the document should be read again or whether the termination state has been reached. If the termination state is reached, the answer module is triggered to generate the answer.
  • Since the termination state is discrete and not connected to the final output, an RL approach is used.
  • Link to the paper

Datasets

  • CNN, DailyMail Dataset
  • SQuAD
  • Graph Reachability Dataset
    • 2 synthetic datasets to test if the network can answer questions like “Is node_1 connected to node_12?”

Architecture

  • Memory (M) - Comprises the vector representations of the document and the question (encoded using GRUs or other RNNs).
  • Attention - The attention vector (xt) is a function of the current internal state st and the external memory M. The state and the memory are passed through fully-connected layers and fed to a similarity function.
  • Internal State (st) - Vector representation of the question state, computed by an RNN using the previous internal state and the attention vector xt.
  • Termination Gate (Tt) - Uses a logistic regression model to generate a random binary variable from the current internal state st.
  • Answer - The answer module is triggered when Tt = 1.
    • For CNN and DailyMail, a linear projection of the GRU outputs is used to predict the answer from the candidate entities.
    • For SQuAD, the positions of the first and the last word of the answer span are predicted.
    • For Graph Reachability, a logistic regression module is used to predict yes/no as the answer.
  • Reinforcement Learning - In the RL setting, the reward at time t is rt = 1 if Tt = 1 and the answer is correct; otherwise rt = 0 (a sketch of the read/terminate loop follows this list).
  • Workflow - Given a passage p, query q and answer a:
    • Extract the memory using p.
    • Extract the initial hidden state using q.
    • ReasoNet executes all possible episodes that can be enumerated by setting an upper limit on the number of passes.
    • These episodes generate actions and answers that are used to train the ReasoNet.
  • Result
    • CNN, DailyMail Corpus
      • ReasoNet outperforms all the baselines which use a fixed number of reasoning steps and could further benefit from capturing the word-alignment signals between query and passage.
    • SQuAD
      • At the time of submission, ReasoNet was ranked 2nd on the SQuAD leaderboard and, as of 9th July 2017, it is ranked 4th.
    • Graph Reachability Dataset
      • ReasoNet - The standard ReasoNet as described above.
      • ReasoNet-Last - Uses the prediction from the final step Tmax.
      • ReasoNet > ReasoNet-Last > Deep LSTM Reader.
      • ReasoNet converges faster than ReasoNet-Last, indicating that the termination gate is useful.
  • Notes
    • As such, there is nothing discouraging the ReasoNet from making unnecessary passes over the passage.
    • In fact, the modal value of the number of passes equals the upper bound on the number of passes.
    • This effect is more prominent for large graphs, indicating that the ReasoNet may try to play safe by performing extra passes.
    • It would be interesting to see if the network can be discouraged from making unnecessary passes by awarding a small negative reward for each pass.
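A minimal sketch of the read/terminate control loop described above, with a hypothetical `step` callable standing in for the attention + internal-state update (illustrative names, not the paper's code):

```python
import torch

def reasonet_episode(step, termination_gate, answer_module, s0, memory, t_max=5):
    """Read repeatedly; stop when the termination gate fires or t_max is hit."""
    s = s0
    for t in range(t_max):
        s = step(s, memory)                      # attention over memory + RNN update
        p_stop = torch.sigmoid(termination_gate(s))
        if torch.bernoulli(p_stop).item() == 1:  # stochastic binary terminate action
            break
    return answer_module(s)                      # reward: 1 if answer correct, else 0
```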
diff --git a/_site/site/2017/08/07/R-NET-Machine-Reading-Comprehension-with-Self-matching-Networks.html b/_site/site/2017/08/07/R-NET-Machine-Reading-Comprehension-with-Self-matching-Networks.html new file mode 100644 index 00000000..12467165 --- /dev/null +++ b/_site/site/2017/08/07/R-NET-Machine-Reading-Comprehension-with-Self-matching-Networks.html @@ -0,0 +1,90 @@ +

Introduction

  • R-NET is an end-to-end trained neural network model for machine comprehension.
  • It starts by matching the question and the given passage (using a gated attention-based RNN) to obtain a question-aware passage representation.
  • Next, it uses a self-matching attention mechanism to refine the passage representation by matching the passage against itself.
  • Lastly, it uses pointer networks to determine the position of the answer in the passage.
  • Link to the paper

Datasets

  • SQuAD
  • MS-MARCO

Architecture

  • Question / Passage Encoder
    • Concatenate the word-level and character-level embeddings for each word and feed them into a bidirectional GRU to obtain the question and passage representations.
  • Gated Attention-based RNN
    • Given the question and passage representations, a sentence-pair representation is generated via soft alignment of the words in the question and in the passage.
    • The newly added gate captures the relation between the question and the current passage word, as only some parts of the passage are relevant for answering the given question.
  • Self-Matching Attention
    • The passage representation obtained so far would not capture most of the context.
    • So the current representation is matched against itself so as to collect evidence from the entire passage and encode the evidence relevant to the current passage word and question.
  • Output Layer
    • Use a pointer network (initialized using attention pooling over the question representation) to predict the position of the answer.
    • The loss function is the sum of the negative log probabilities of the start and end positions (a sketch follows this list).
  • Results
    • R-NET is ranked second on the SQuAD leaderboard as of 7th August 2017 and achieves the best published results on the MS-MARCO dataset.
    • Ideas like sentence ranking, using syntax information, performing multi-hop inference and augmenting the question dataset (using a seq2seq network) did not help in improving the performance.
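A minimal sketch of the span loss named above, assuming `start_logits` and `end_logits` come from the pointer network (illustrative tensors only):

```python
import torch
import torch.nn.functional as F

def span_loss(start_logits, end_logits, start_idx, end_idx):
    """Sum of negative log-likelihoods of the gold start and end positions."""
    return (F.cross_entropy(start_logits, start_idx)
            + F.cross_entropy(end_logits, end_idx))

# usage with a batch of 2 passages of length 50:
loss = span_loss(torch.randn(2, 50), torch.randn(2, 50),
                 torch.tensor([3, 10]), torch.tensor([5, 12]))
```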
diff --git a/_site/site/2017/08/21/Learning-to-Compute-Word-Embeddings-On-the-Fly.html b/_site/site/2017/08/21/Learning-to-Compute-Word-Embeddings-On-the-Fly.html new file mode 100644 index 00000000..fe327be6 --- /dev/null +++ b/_site/site/2017/08/21/Learning-to-Compute-Word-Embeddings-On-the-Fly.html @@ -0,0 +1,79 @@ +

Introduction

  • Word-based language models suffer from the problem of rare or Out-of-Vocabulary (OOV) words.
  • Learning representations for OOV words directly on the end task often results in poor representations.
  • The alternatives are to replace all the rare words with a single, unique representation (loss of information) or to use character-level models to obtain word representations (these tend to miss the semantic relationships).
  • The paper proposes to learn a network that can predict the representations of words using auxiliary data (referred to as definitions), such as dictionary definitions, Wikipedia infoboxes, the spelling of the word, etc.
  • The auxiliary data encoders are trained jointly with the end task to ensure that the word representations align with the requirements of the end task.

Approach

  • Given a rare word w, let d(w) = <x1, x2, …> denote its definition, where xi are words.
  • d(w) is fed to a definition reader network f (an LSTM) and its last state is used as the definition embedding ed(w) (a sketch follows this list).
  • In case w has multiple definitions, the embeddings are combined using mean pooling.
  • The approach can be extended to in-vocabulary words as well, by using the definition embeddings of such words to update their original embeddings.
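A minimal PyTorch sketch of the definition reader: an LSTM whose last state embeds one definition, with mean pooling across multiple definitions (dimensions and names are illustrative):

```python
import torch
import torch.nn as nn

class DefinitionReader(nn.Module):
    def __init__(self, vocab_size, dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, definitions):
        """definitions: list of LongTensors, one token sequence per definition."""
        states = []
        for d in definitions:
            _, (h, _) = self.lstm(self.embed(d.unsqueeze(0)))
            states.append(h[-1, 0])          # last hidden state = e_d(w)
        return torch.stack(states).mean(0)   # mean-pool multiple definitions

# usage: DefinitionReader(5000)([torch.tensor([1, 4, 7]), torch.tensor([2, 9])])
```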

Experiments

  • Auxiliary data sources
    • Word definitions from WordNet
    • Spelling of words
  • The proposed approach was tested on a number of end tasks.
  • For all the tasks, models using both spelling and dictionary (SD) outperformed models using just one of them.
  • While SD does not outperform the GloVe model (with full vocabulary), it does bridge the performance gap significantly.

Future Work

  • Multi-token words like “San Francisco” are not accounted for yet.
  • The model does not handle rare words that appear in the definitions themselves and just replaces them with a placeholder token. Making the model recursive would be a useful addition.
diff --git a/_site/site/2017/08/27/Pointer-Networks.html b/_site/site/2017/08/27/Pointer-Networks.html new file mode 100644 index 00000000..b0419516 --- /dev/null +++ b/_site/site/2017/08/27/Pointer-Networks.html @@ -0,0 +1,64 @@ +

Introduction

  • The paper introduces a novel architecture that generates an output sequence whose elements are discrete tokens corresponding to positions in the input sequence.
  • Such a problem cannot be solved using Seq2Seq models or Neural Turing Machines, as the size of the output softmax is variable (it depends on the size of the input sequence).
  • Link to the paper

Architecture

  • Traditional attention-based sequence-to-sequence models compute an attention vector for each step of the output decoder and use it to blend the individual context vectors of the input into a single, consolidated attention vector, which is then used to compute a fixed-size softmax.
  • In Pointer Nets, the attention vector (over all the tokens in the input sequence) is normalized and treated directly as the softmax output over the input tokens (a sketch follows this list).
  • So Pointer Net is a very simple modification of the attention model.
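A minimal sketch of the pointer attention step: additive attention scores over encoder states are softmaxed directly into a distribution over input positions (dimensions are illustrative):

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_enc = nn.Linear(dim, dim, bias=False)
        self.w_dec = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        """enc_states: (n, dim); dec_state: (dim,). Returns p over the n inputs."""
        scores = self.v(torch.tanh(self.w_enc(enc_states)
                                   + self.w_dec(dec_state))).squeeze(-1)
        return torch.softmax(scores, dim=-1)   # distribution over input positions

# usage: PointerAttention(8)(torch.randn(5, 8), torch.randn(8))
```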

Application

  • Any problem where the size of the output depends on the size of the input, because of which a fixed-length softmax is ruled out.
  • E.g. combinatorial problems such as computing the planar convex hull, where the size of the output depends on the size of the input.

Evaluation

  • The paper considers the following 3 problems:
    • Convex Hull
    • Delaunay Triangulation
    • Travelling Salesman Problem (TSP)
  • Since some of the problems are NP-hard, the paper considers approximate solutions wherever the exact solutions are not feasible to compute.
  • The authors used the exact same architecture and model parameters for all the instances of the 3 problems, to show the generality of the model.
  • The proposed Pointer Nets outperform LSTMs and LSTMs with attention, and can generalise quite well to much larger sequences.
  • Interestingly, the order in which the inputs are fed to the system affects its performance. The authors discuss this aspect in their subsequent paper titled Order Matters: Sequence To Sequence for Sets.
diff --git a/_site/site/2017/09/22/Refining-Source-Representations-with-Relation-Networks-for-Neural-Machine-Translation.html b/_site/site/2017/09/22/Refining-Source-Representations-with-Relation-Networks-for-Neural-Machine-Translation.html new file mode 100644 index 00000000..ae052feb --- /dev/null +++ b/_site/site/2017/09/22/Refining-Source-Representations-with-Relation-Networks-for-Neural-Machine-Translation.html @@ -0,0 +1,85 @@ +

Introduction

  • The paper introduces a Relation Network (RN) that refines the encoded representation of the given source document (or sentence).
  • This refined source representation can then be used in Neural Machine Translation (NMT) systems to counter the problem of RNNs forgetting old information.
  • Link to the paper

Limitations of existing NMT models

  • The RNN encoder-decoder architecture is the standard choice for NMT systems, but RNNs are prone to forgetting old information.
  • In NMT models, attention is modeled at the level of words, while using phrases (instead of words) would be a better choice.
  • While NMT systems might be able to capture certain relationships between words, they are not explicitly designed to capture such information.

Contributions of the paper

  • Learn the relationships between the source words using the context (neighboring words).
  • Relation Networks (RNs) build pairwise relations between source words using the representations generated by the RNNs. The RN sits between the encoder and the attention layer of the encoder-decoder framework, thereby keeping the main architecture unaffected.

Relation Network

  • A neural network designed for relational reasoning.
  • Given a set of inputs O = {o1, …, on}, the RN is formed as a composition of the inputs: RN(O) = f(sum(g(oi, oj))), where f and g are functions used to learn the relations (feed-forward networks).
  • g learns how the objects are related, hence the name “relation” (a sketch follows this list).
  • Components:
    • CNN Layer
      • Extracts information from the words surrounding the given word (the context).
      • The final output of this layer is a sequence of vectors for different kernel widths.
    • Graph Propagation (GP) Layer
      • Connects all the words with each other in the form of a graph.
      • Each output vector from the CNN corresponds to a node in the graph, and there is an edge between every possible pair of nodes.
      • Information flows between the nodes of the graph in a message-passing fashion (graph propagation) to obtain a new set of vectors for each node.
    • Multi-Layer Perceptron (MLP) Layer
      • The representation from the GP layer is fed to the MLP layer.
      • The layer uses residual connections from previous layers in the form of concatenation.
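A minimal sketch of the RN composition RN(O) = f(sum(g(oi, oj))) with tiny feed-forward networks for f and g (dimensions are illustrative):

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, dim=16, hidden=32):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim))

    def forward(self, objects):
        """objects: (n, dim). Sum g over all ordered pairs, then apply f."""
        n = objects.size(0)
        oi = objects.unsqueeze(1).expand(n, n, -1)
        oj = objects.unsqueeze(0).expand(n, n, -1)
        pair_relations = self.g(torch.cat([oi, oj], dim=-1))
        return self.f(pair_relations.sum(dim=(0, 1)))

# usage: RelationNetwork()(torch.randn(5, 16))
```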

Datasets

  • IWSLT Data - 44K sentences from the tourism and travel domain.
  • NIST Data - 1M Chinese-English parallel sentence pairs.

Models

  • MOSES - Open-source translation system - http://www.statmt.org/moses/
  • NMT - Attention-based NMT
  • NMT+ - NMT with improved decoder
  • TRANSFORMER - Google’s new NMT
  • RNMT+ - Relation Network integrated with NMT+

Evaluation Metric

  • Case-insensitive 4-gram BLEU score

Observations

  • As sentences become longer (more than 50 words), RNMT+ clearly outperforms the other baselines.
  • Qualitative evaluation shows that the RNMT+ model captures word alignment better than the NMT+ model.
  • Similarly, the NMT+ system tends to miss some information from the source sentence (more so for longer sentences). While both CNNs and RNNs are weak at capturing long-term dependencies, using the relation layer mitigates this issue to some extent.
diff --git a/_site/site/2017/10/01/Task-Oriented-Query-Reformulation-with-Reinforcement-Learning.html b/_site/site/2017/10/01/Task-Oriented-Query-Reformulation-with-Reinforcement-Learning.html new file mode 100644 index 00000000..42ea0bcc --- /dev/null +++ b/_site/site/2017/10/01/Task-Oriented-Query-Reformulation-with-Reinforcement-Learning.html @@ -0,0 +1,114 @@ +

Introduction

  • The paper introduces a query reformulation system that rewrites a query to maximise the number of “relevant” documents extracted from a given black-box search engine.
  • A Reinforcement Learning (RL) agent selects the terms that are to be added to the reformulated query, and the rewards are decided on the basis of document recall.
  • Link to the paper
  • Implementation

Key Aspect

  • The underlying problem is as follows: when the end user makes a query to a search engine, the engine often relies on word-matching techniques to perform retrieval. This means relevant documents can be missed if there are no exactly matching words between the query and the document.
  • This problem can be handled at two levels: either the search engine itself takes care of query semantics, or, alternatively, we assume the search engine to be dumb and instead have a system in place that can improve the original queries (automatic query reformulation).
  • The paper takes the latter approach and expands the original query by adding terms from the set of retrieved documents (pseudo-relevance feedback).

Datasets

  • TREC - Complex Answer Retrieval (TREC-CAR)
  • Jeopardy Q&A dataset
  • Microsoft Academic (MSA) dataset - created by the authors using papers crawled from the Microsoft Academic API

Framework

  • The query reformulation task is modeled as an RL problem where:
    • The environment is the search engine.
    • The actions are whether a word is to be added to the query or not and, if yes, which word is added.
    • The reward is the retrieval accuracy.
  • The input to the system is a query q0 consisting of a sequence of words w1, …, wn and a candidate term ti with some context words.
  • Candidate terms are all the terms that appear in the original query and in the documents retrieved using the query.
  • The words are mapped to vectors, and a fixed-size representation of the sequence is obtained using CNNs or RNNs.
  • Similarly, a representation is obtained for the candidate words by feeding them and their context words to the CNN or RNN.
  • Finally, a sigmoidal score is computed for all the candidate words.
  • An RNN sequentially applies this model to emit query words till an end token is emitted.
  • The vocabulary is restricted to the extracted documents (rather than the entire vocabulary set) to keep inference fast.

Training

  • The model is trained using the REINFORCE algorithm, which minimises Ca = -(R − R̄) * sum(log(P(t|q))), where R̄ is the baseline.
  • The value network minimises Cb = α * ||R − R̄||²
  • Ca and Cb are minimised using SGD (a sketch follows this list).
  • An entropy regularisation term is added to prevent the probability distribution from peaking.
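A minimal sketch of the two losses under these definitions (tensors are illustrative stand-ins; a real setup would also add the entropy term):

```python
import torch

def reinforce_losses(term_log_probs, reward, baseline, alpha=0.1):
    """term_log_probs: log P(t|q) of the selected terms; reward, baseline: scalars."""
    advantage = (reward - baseline).detach()          # R - R~
    policy_loss = -advantage * term_log_probs.sum()   # Ca, minimised by SGD
    value_loss = alpha * (reward - baseline) ** 2     # Cb for the value network
    return policy_loss, value_loss

# usage:
ca, cb = reinforce_losses(torch.log(torch.tensor([0.7, 0.4])),
                          torch.tensor(0.8), torch.tensor(0.5))
```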

Experiments


Baseline Methods

  • Raw - The original query is fed to the search engine without any modification.
  • Pseudo-Relevance Feedback (PRF-TFIDF) - The query is expanded using the top-N TF-IDF terms.
  • PRF-Relevance Model (PRF-RM) - The probability of adding token t to the query q0 is given by P(t|q0) = (1 − λ) P′(t|q0) + λ sum(P(d) P(t|d) P(q0|d))

Proposed Methods

  • Supervised Learning
    • Assumes that the query words contribute independently to the query retrieval performance (too strong an assumption).
    • A term is marked as relevant if (R(new_query) - R(old_query)) / R(old_query) > 0.005
  • Reinforcement Learning
    • RL-RNN/CNN - RL framework + RNN/CNN to encode the input features.
    • RL-RNN-SEQ - Adds a sequential generator.
  • Metrics
    • Recall@K
    • Precision@K
    • Mean Average Precision@K
  • Reward - The paper uses Recall@K as the reward when training the RL-based models, with the argument that the “metric has shown to be effective in improving the other metrics as well”, though without any justification.
  • SL-Oracle - A classifier that perfectly selects terms that will increase performance based on the supervised learning approach.
  • RL-Oracle - Produces a conservative upper bound on the performance of the RL agent. It splits the test data into N subsets and trains an RL agent on each subset. Then, the reward is averaged over all the N subsets.

Observations

  • Reformulation-based methods > original query.
  • RL methods > supervised methods > unsupervised methods.
  • RL-RNN-SEQ performs slightly worse than RL-RNN but is much faster (as it produces shorter queries).
  • The RL-based model benefits from more candidate terms, while the classical PRF method quickly saturates.

Comments

  • Interestingly, for each raw query, the reformulation step is carried out just once and not multiple times. The number of times a query is reformulated could also have been made part of the RL framework.
diff --git a/_site/site/2017/10/15/Reading-Wikipedia-to-Answer-Open-Domain-Questions.html b/_site/site/2017/10/15/Reading-Wikipedia-to-Answer-Open-Domain-Questions.html new file mode 100644 index 00000000..c02b439d --- /dev/null +++ b/_site/site/2017/10/15/Reading-Wikipedia-to-Answer-Open-Domain-Questions.html @@ -0,0 +1,74 @@ +

Introduction

  • The paper presents a new machine comprehension dataset for question answering in a real-life setting (say, when interacting with Cortana/Siri).
  • Link to the paper

Unique Aspects of the dataset

  • Existing machine comprehension (MC) datasets are either too small or synthetic (with a distribution different from that of real questions posed by humans). MARCO questions are sampled from real, anonymized user queries.
  • Most datasets provide a comparatively small and clean context for answering each question. In MARCO, the context documents (which may or may not contain the answer) are extracted using Bing from real-world documents. As such, the questions and the context documents are noisy.
  • In general, the answers to the questions are restricted to an entity or a text span within the document. In the case of MARCO, the human judges are encouraged to generate complete sentences as answers.

Dataset Description

  • The first release consists of 100K questions, with the aim of releasing 1M questions in future releases.
  • All questions are tagged with segment information.
  • A subset of questions has multiple answers and another subset has no answers at all.
  • Each record in the dataset contains the following information (an illustrative record follows this list):
    • Query - The actual question.
    • Passage - Top 10 contextual passages extracted from a web search engine (which may or may not contain the answer to the question).
    • Document URLs - URLs of the top documents (the sources of the contextual passages).
    • Answer - Answer synthesised by human evaluators.
    • Segment - Query type: description, numeric, entity, location, person.
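A hypothetical record in this shape (field names and values are illustrative, not copied from the dataset):

```python
record = {
    "query": "what is the average temperature in svalbard in winter",
    "passages": ["Svalbard winters are long and cold ...", "..."],  # top-10 passages
    "document_urls": ["https://example.com/svalbard-climate", "..."],
    "answers": ["The average winter temperature is around -14 C."],
    "segment": "numeric",   # one of: description, numeric, entity, location, person
}
```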

Experimental Results

  • Metrics
    • Accuracy and precision/recall for numeric questions.
    • ROUGE-L and a paraphrasing-aware evaluation framework for long, textual answers.
  • Among generative models, Memory Networks performed better than seq2seq.
  • In the cloze-style test, ReasoNet achieved an accuracy of approx. 59% while Attention Sum Reader achieved an accuracy of approx. 55%.
  • Current QA systems (including the ones using memory and attention) derive their power from supervised data and are very different from how humans do reasoning.
  • The ImageNet dataset pushed the state-of-the-art performance on object classification beyond human accuracy. Similar was the case with the speech recognition dataset from DARPA, which led to the advancement of speech recognition. Having a large, diverse dataset of human-like questions is a fundamental requirement for advancing the field, and the paper aims to provide just the right kind of dataset.
diff --git a/_site/site/2017/10/22/Swish-A-self-gated-activation-function.html b/_site/site/2017/10/22/Swish-A-self-gated-activation-function.html new file mode 100644 index 00000000..8eafeb18 --- /dev/null +++ b/_site/site/2017/10/22/Swish-A-self-gated-activation-function.html @@ -0,0 +1,45 @@ +

Introduction

  • The paper presents a new activation function called Swish, with formulation f(x) = x * sigmoid(x), and its parameterised version called Swish-β, where f(x, β) = x * sigmoid(β * x) and β is a trainable parameter.
  • The paper shows that Swish consistently outperforms ReLU and other activation functions over a variety of datasets (CIFAR, ImageNet, WMT2014), though only by small margins in some cases.
  • Link to the paper

Properties of Swish

  • Plot of Swish
  • Smooth, non-monotonic function.
  • Swish-β can be thought of as a smooth function that interpolates between a linear function and ReLU.
  • Uses a self-gating mechanism (that is, it uses its own value to gate itself). Gating generally uses multiple scalar inputs, but since self-gating uses a single scalar input, it can be used to replace activation functions, which are generally pointwise (a sketch follows this list).
  • Being unbounded on the x > 0 side, it avoids saturation when training is slow due to near-zero gradients.
  • Being bounded below induces a kind of regularization effect, as large negative inputs are forgotten.
  • Since the Swish function is smooth, the output landscape and the loss landscape are also smooth. A smooth landscape should be more traversable and less sensitive to initialization and learning rates.
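A minimal sketch of the two variants as described above (treating β as a learnable scalar is an assumption about how one might implement it):

```python
import torch
import torch.nn as nn

def swish(x):
    return x * torch.sigmoid(x)                  # f(x) = x * sigmoid(x)

class SwishBeta(nn.Module):
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))  # trainable gating temperature

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

# usage: SwishBeta()(torch.linspace(-3, 3, 7))
```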

Criticism

  • Swish is much more complicated than ReLU (when weighed against the small improvements it provides), so it might not see as strong an adoption as ReLU.
diff --git a/_site/site/2017/10/28/HARP-Hierarchical-Representation-Learning-for-Networks.html b/_site/site/2017/10/28/HARP-Hierarchical-Representation-Learning-for-Networks.html new file mode 100644 index 00000000..e71db893 --- /dev/null +++ b/_site/site/2017/10/28/HARP-Hierarchical-Representation-Learning-for-Networks.html @@ -0,0 +1,55 @@ +

Introduction

  • HARP is an architecture for learning low-dimensional node embeddings by compressing the input graph into successively smaller graphs.
  • Link to the paper
  • Given a graph G = (V, E), compute a series of successively smaller (coarser) graphs G0, …, GL. Learn the node representations on GL and successively refine the embeddings for the larger graphs in the series.
  • The architecture is independent of the algorithms used to embed the nodes or to refine the node representations.
  • Graph coarsening technique that preserves global structure:
    • Collapse edges and stars to preserve first- and second-order proximity.
    • Edge collapsing - select a subset of E such that no two edges are incident on the same vertex, then merge the endpoints of each selected edge into a single node and merge their edges as well.
    • Star collapsing - given a star structure, collapse pairs of neighboring nodes (of the central node).
    • In practice, first apply star collapsing, followed by edge collapsing.
  • Extending node representations from the coarser graph to the finer graph (a sketch of this step follows the list):
    • Say node1 and node2 were merged into node12 during coarsening. First copy the representation of node12 into node1 and node2.
    • Additionally, if hierarchical softmax was used, extend the B-tree such that node12 is replaced by 2 child nodes, node1 and node2.
    • The time complexity of HARP + DeepWalk is O(number of walks * |V|), while that of HARP + LINE is O(number of iterations * |E|).
    • The asymptotic complexity remains the same as that of the HARP-less version in both cases.
  • The multilabel classification task shows that HARP improves all the node embedding techniques, with gains of up to 14%.
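A minimal sketch of the prolongation step described above: embeddings learned on the coarser graph are copied to the merged nodes' children before refinement (the merge map is an illustrative input format):

```python
import numpy as np

def prolong_embeddings(coarse_emb, merge_map, n_fine_nodes, dim):
    """coarse_emb: (n_coarse, dim); merge_map: fine node -> coarse node index."""
    fine_emb = np.zeros((n_fine_nodes, dim))
    for fine_node, coarse_node in merge_map.items():
        fine_emb[fine_node] = coarse_emb[coarse_node]  # copy parent representation
    return fine_emb  # used as initialization when re-running the embedding method

# usage: prolong_embeddings(np.random.rand(2, 4), {0: 0, 1: 0, 2: 1}, 3, 4)
```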
diff --git a/_site/site/2017/11/05/Word-Representations-via-Gaussian-Embedding.html b/_site/site/2017/11/05/Word-Representations-via-Gaussian-Embedding.html new file mode 100644 index 00000000..4c232c0b --- /dev/null +++ b/_site/site/2017/11/05/Word-Representations-via-Gaussian-Embedding.html @@ -0,0 +1,62 @@ +

Introduction

  • Existing word embedding models like Skip-Gram, GloVe, etc. map words to fixed-size vectors in a low-dimensional vector space.
  • This fixed-point setting cannot capture uncertainty about a representation.
  • Further, these fixed-point vectors are compared with measures like dot product and cosine similarity, which are not suitable for capturing asymmetric properties like textual entailment and inclusion.
  • The paper proposes to learn Gaussian function embeddings (with diagonal covariance) for the word vectors.
  • This way, words are mapped to soft regions in the embedding space, which enables modeling uncertainty and asymmetric properties like inclusion and entailment.
  • Link to the paper
  • Implementation

Approach

  • KL divergence is used as the asymmetric distance function for comparing the distributions.
  • Unlike the Word2Vec model, the proposed model uses a ranking-based loss.

Similarity Measures used

  • Symmetric Similarity
    • For two Gaussian distributions Pi and Pj, compute the inner product E(Pi, Pj) as N(0; meani - meanj, sigmai + sigmaj).
    • Compute the gradients of the mean and sigma with respect to log(E).
    • The resulting loss function can be interpreted as pushing the means closer while encouraging the two Gaussians to be more concentrated.
  • Asymmetric Similarity
    • Use KL divergence to encode the context distribution.
    • The benefit over the symmetric setting is that entailment-type relations can now also be modeled.
    • For example, a low KL divergence from x to y indicates that y can be encoded as x, or that y “entails” x (a sketch of both measures follows this list).
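A minimal sketch of both similarity measures for diagonal Gaussians, using the standard closed forms (the log of the Gaussian inner product, and the KL divergence between diagonal Gaussians):

```python
import numpy as np

def log_inner_product(mu_i, var_i, mu_j, var_j):
    """log E(Pi, Pj) = log N(0; mu_i - mu_j, var_i + var_j), diagonal covariances."""
    var = var_i + var_j
    diff = mu_i - mu_j
    return -0.5 * np.sum(np.log(2 * np.pi * var) + diff ** 2 / var)

def kl_divergence(mu_i, var_i, mu_j, var_j):
    """KL(Pi || Pj) for diagonal Gaussians; asymmetric, suits entailment."""
    return 0.5 * np.sum(var_i / var_j + (mu_j - mu_i) ** 2 / var_j
                        - 1 + np.log(var_j / var_i))

# usage: kl_divergence(np.zeros(3), np.ones(3), np.ones(3), 2 * np.ones(3))
```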

Learning

  • One of the two notions of similarity is chosen, and max-margin is used as the loss function.
  • The mean is regularized by adding a simple constraint on its L2 norm.
  • For the covariance matrix, the eigenvalues are constrained to lie within a hypercube. This ensures that the positive-definite property of the covariance matrix is maintained while keeping its size bounded.

Observations

  • Polysemous words have higher variance in their word embeddings as compared to specific words.
  • KL divergence (with diagonal covariance) outperforms the other models.
  • Simple tree hierarchies can also be modeled by embedding into the Gaussian space. A Gaussian is created for each node with a randomly initialized mean, and the same set of embeddings is used for nodes and context.
  • For word similarity benchmarks, embeddings with spherical covariance have a slight edge over embeddings with diagonal covariance, and both outperform the Skip-Gram model in all the cases.

Future Work

  • Use combinations of low-rank and diagonal matrices for covariances.
  • Improved optimisation strategies.
  • Trying other distributions like the Student's-t distribution.
diff --git a/_site/site/2017/11/12/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks.html b/_site/site/2017/11/12/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks.html new file mode 100644 index 00000000..b9df89b0 --- /dev/null +++ b/_site/site/2017/11/12/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks.html @@ -0,0 +1,34 @@ +

Introduction

  • The paper presents the concept of “network motifs” for understanding the structural design of a network or a graph.
  • Link to the paper

Idea

  • A network motif is defined as “a pattern of inter-connections occurring in complex networks in numbers that are significantly higher than those in randomized networks”.
  • In the practical setting, given an input network, we first create randomized networks which have the same single-node characteristics (such as the number of incoming and outgoing edges) as the input network.
  • The patterns that occur at a much higher frequency in the input graph (than in the randomized graphs) are reported as motifs.
  • More specifically, motifs are the patterns for which the probability of appearing in a randomized network an equal or greater number of times than in the real network is lower than a cutoff value (say 0.01). A sketch of this test follows the list.
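A minimal sketch of this detection recipe, using networkx for degree-preserving randomization and triangles as the candidate pattern (the cutoff test is simplified to an empirical p-value; parameter values are illustrative):

```python
import networkx as nx

def count_triangles(g):
    return sum(nx.triangles(g).values()) // 3

def is_motif(g, n_random=100, cutoff=0.01):
    """Empirical test: is the triangle count rarely matched by randomized graphs?"""
    real = count_triangles(g)
    hits = 0
    for _ in range(n_random):
        r = g.copy()  # degree-preserving rewiring keeps single-node characteristics
        nx.double_edge_swap(r, nswap=2 * g.number_of_edges(), max_tries=10**5)
        if count_triangles(r) >= real:   # randomized graph matches the real count
            hits += 1
    return hits / n_random < cutoff

# usage: is_motif(nx.karate_club_graph())
```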

Motivation

  • Real-life networks exhibit properties like the “small world” property (the majority of nodes are within a distance of fewer than 7 hops from each other) and the “scale-free” property (the fraction of nodes having k edges decays as a power law).
  • Motifs are one such structural property, exhibited by networks in biochemistry, neurobiology, ecology, and engineering. Further, the motifs shared by graphs from different domains differ, which hints at the usefulness of motifs as a fundamental structural property of a graph, related to the process of its evolution.
diff --git a/_site/site/2017/11/19/Higher-order-organization-of-complex-networks.html b/_site/site/2017/11/19/Higher-order-organization-of-complex-networks.html new file mode 100644 index 00000000..9fbc589a --- /dev/null +++ b/_site/site/2017/11/19/Higher-order-organization-of-complex-networks.html @@ -0,0 +1,62 @@ +

Introduction

  • The paper presents a generalized framework for graph clustering (finding clusters of network motifs) on the basis of higher-order connectivity patterns.
  • Link to the paper

Approach

  • Given a motif M, the framework aims to find a cluster of nodes S such that the nodes of S participate in many instances of M while avoiding cutting instances of M (where only a subset of the nodes of an instance of M appears in S).
  • Mathematically, the aim is to minimise the motif conductance metric, given as cutM(S, S’) / min[volM(S), volM(S’)], where S’ is the complement of S, cutM(S, S’) is the number of instances of M which have at least one node in both S and S’, and volM(S) is the number of nodes in instances of M that belong only to S.
  • Solving the above equation exactly is computationally infeasible, and an approximate solution is proposed using eigenvalues and matrices.
  • The approximate solution is easy to implement, efficient and guaranteed to find clusters that are at most a quadratic factor away from optimal.

Algorithm

  • Given the network and a motif M, form a motif adjacency matrix WM, where WM(i, j) is the number of instances of M that contain both i and j.
  • Compute the spectral ordering of the nodes from the normalized motif Laplacian matrix.
  • Compute the prefix set of the spectral ordering with the smallest motif conductance (a sketch follows this list).
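A minimal numpy sketch of this pipeline for the triangle motif: a dense motif adjacency matrix, the second eigenvector of the normalized Laplacian, and a conductance sweep over prefixes (illustrative, not the authors' implementation):

```python
import numpy as np

def motif_spectral_cluster(A):
    """A: dense 0/1 symmetric adjacency matrix. Motif M = triangle."""
    W = (A @ A) * A                        # W[i, j] = #triangles through edge (i, j)
    d = W.sum(1)
    d[d == 0] = 1                          # guard nodes in no motif instance
    Dinv = np.diag(1 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ W @ Dinv   # normalized motif Laplacian
    _, vecs = np.linalg.eigh(L)
    order = np.argsort(Dinv @ vecs[:, 1])  # spectral ordering of the nodes
    best, best_cond = None, np.inf         # sweep prefixes for min motif conductance
    for k in range(1, len(A)):
        mask = np.zeros(len(A), bool)
        mask[order[:k]] = True
        cut = W[mask][:, ~mask].sum()
        vol = min(W[mask].sum(), W[~mask].sum())
        if vol > 0 and cut / vol < best_cond:
            best, best_cond = order[:k], cut / vol
    return best, best_cond
```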

Scalability

  • Worst case O(m^1.5); based on experiments, O(m^1.2), where m is the number of edges.

Advantages

  • Applicable to directed, undirected and weighted graphs (negative edge weights are allowed as well).
  • In case the motif is not known beforehand, the framework can be used to compute significant motifs.
  • The proposed framework unifies two fundamental tools of network science (motif analysis and network partitioning), comes with worst-case guarantees for the approximations employed, and can be extended to identify the higher-order modular organization of networks.
diff --git a/_site/site/2017/11/28/Two-Stage-Synthesis-Networks-for-Transfer-Learning-in-Machine-Comprehension.html b/_site/site/2017/11/28/Two-Stage-Synthesis-Networks-for-Transfer-Learning-in-Machine-Comprehension.html new file mode 100644 index 00000000..49bd9020 --- /dev/null +++ b/_site/site/2017/11/28/Two-Stage-Synthesis-Networks-for-Transfer-Learning-in-Machine-Comprehension.html @@ -0,0 +1,98 @@ +

Introduction

  • The paper proposes a two-stage synthesis network (SynNet) that can perform transfer learning for the task of machine comprehension.
  • The problem is the following:
    • We have a domain DS for which we have a labelled dataset of question-answer pairs and another domain DT for which we do not have any labelled dataset.
    • We use the data from domain DS to train SynNet and use it to generate synthetic question-answer pairs for domain DT.
    • Now we can train a machine comprehension model M on DS and fine-tune it using the synthetic data for DT.
  • Link to the paper

SynNet

  • Works in two stages:
    • Answer Synthesis - Given a text paragraph, generate an answer.
    • Question Synthesis - Given a text paragraph and an answer, generate a question.

Answer Synthesis Network

  • Given the labelled dataset for DS, generate a labelled dataset of <word, tag> pairs such that each word in the given paragraph is assigned one of the 4 tags:
    • IOBstart - if it is the starting word of an answer
    • IOBmid - if it is an intermediate word of an answer
    • IOBend - if it is the ending word of an answer
    • IOBnone - if it is not part of any answer
  • For training, map the words to their GloVe embeddings and pass them through a Bi-LSTM, followed by two fully-connected layers and a softmax layer.
  • For the target domain DT, all the consecutive word spans where no label is IOBnone are returned as candidate answers (a sketch of this extraction follows).
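A minimal sketch of the candidate-answer extraction rule above (tag names as in the list; spans are maximal runs of non-IOBnone tags):

```python
def candidate_answers(words, tags):
    """Return maximal word spans whose tags are all != 'IOBnone'."""
    spans, start = [], None
    for i, tag in enumerate(tags + ["IOBnone"]):   # sentinel closes a trailing span
        if tag != "IOBnone" and start is None:
            start = i
        elif tag == "IOBnone" and start is not None:
            spans.append(" ".join(words[start:i]))
            start = None
    return spans

# usage:
candidate_answers(["the", "eiffel", "tower", "is", "in", "paris"],
                  ["IOBnone", "IOBstart", "IOBend", "IOBnone", "IOBnone", "IOBnone"])
# -> ['eiffel tower']
```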

Question Synthesis Network

  • Given an input paragraph and a candidate answer, the Question Synthesis network generates the question one word at a time.
  • Map each word in the paragraph to its GloVe embedding. To the word vector, append a ‘1’ if the word is part of the candidate answer, else append a ‘0’.
  • Feed this to a Bi-LSTM network (encoder-decoder), where the decoder conditions on the representation generated by the encoder as well as on the question tokens generated so far. Decoding is stopped when the “END” token is produced.
  • The paragraph may contain some named entities or rare words which do not appear in the softmax vocabulary. To account for such words, a copying mechanism is also incorporated.
  • At each time step, a Pointer Network (CP) and a Vocabulary Predictor (VP) are used to generate the probability distribution for the next word, and a Latent Predictor Network is used to decide which of the two networks is used for the prediction.
  • At inference time, greedy decoding is used: the most likely predictor is chosen, and then the most likely word from that predictor is chosen.

Machine Comprehension Model

  • Given any MC model, first train it on domain DS and then fine-tune it using the artificial questions generated for DT.

Implementation Details

  • Data Regularization - There is a need to alternate between mini-batches from the source and target domains while fine-tuning the MC model.
  • At inference time, the fine-tuned MC model is used to get the distributions P(i = start) and P(i = end) (corresponding to the likelihood of choosing word i as the starting or ending word of the answer) for all the words, and dynamic programming (DP) is used to find the optimal answer span.
  • Checkpoint Averaging - Use the different checkpointed models to average the answer likelihoods before running DP.
  • Using the synthetically generated dataset helps to gain a 2% improvement in F-score (SQuAD -> NewsQA). Using checkpointed models further improves the performance to an overall 46.6% F-score, which closes the gap with respect to the performance of a model trained on NewsQA itself (~52.3% F-score).
diff --git a/_site/site/2017/12/11/Revisiting-Semi-Supervised-Learning-with-Graph-Embeddings.html b/_site/site/2017/12/11/Revisiting-Semi-Supervised-Learning-with-Graph-Embeddings.html new file mode 100644 index 00000000..7e8d636c --- /dev/null +++ b/_site/site/2017/12/11/Revisiting-Semi-Supervised-Learning-with-Graph-Embeddings.html @@ -0,0 +1,86 @@ +

Introduction


Problem Setting

  • Given a graph G = (V, E), with xL and xU as the feature vectors for the labelled and unlabelled nodes and yL as the labels for the labelled nodes, the problem is to learn a mapping (classifier) f: x -> y.
  • There are two possible settings:
    • Transductive - Predictions are made only for those nodes which are already observed in the graph at training time.
    • Inductive - Predictions are made for nodes whether they have been observed in the graph at training time or not.

Approach

  • The general semi-supervised learning loss would be LS + λLU, where LS is the supervised learning loss and LU is the unsupervised learning loss (a sketch of this loss follows the list).
  • The unsupervised loss is a variant of the Skip-gram loss with negative edge sampling.
  • More specifically, first a random walk sequence S is sampled. Then either a positive edge is sampled from S (within a given context distance) or a negative edge is sampled.
  • The label information is injected by using the label as a context and minimising the distance between positive edges (edges whose nodes have the same label) while maximising the distance between negative edges (edges whose nodes have different labels).
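A minimal sketch of the combined objective LS + λLU, with a skip-gram-style edge loss for LU (all shapes and the sampled pairs are illustrative):

```python
import torch
import torch.nn.functional as F

def edge_loss(emb, i, j, neg):
    """Skip-gram-style loss: pull a sampled positive pair (i, j) together,
    push a negatively sampled pair (i, neg) apart (binary logistic form)."""
    pos = F.logsigmoid((emb[i] * emb[j]).sum())
    neg_term = F.logsigmoid(-(emb[i] * emb[neg]).sum())
    return -(pos + neg_term)

def total_loss(sup_logits, labels, emb, pos_pair, neg_node, lam=1.0):
    """L = L_S + lambda * L_U."""
    return F.cross_entropy(sup_logits, labels) + lam * edge_loss(emb, *pos_pair, neg_node)

# usage:
emb = torch.randn(10, 16, requires_grad=True)
loss = total_loss(torch.randn(4, 3), torch.tensor([0, 2, 1, 0]), emb, (1, 4), 7)
```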

Transductive Formulation

  • Two separate fully-connected networks are applied over the node features and the node embeddings.
  • These 2 representations are then concatenated and fed to a softmax classifier to predict the class label.

Inductive Formulation

  • In the inductive setting, it is difficult to obtain node embeddings at test time. One naive approach is to retrain the network to obtain the embeddings of the previously unobserved nodes, but that is inefficient.
  • Instead, the embedding of a node x is parameterized as a function of its input feature vector, learnt by applying a fully-connected neural network to the node feature vector.
  • This provides a simple way to extend the original approach to the inductive setting.

Results

  • The proposed approach is evaluated in 3 settings (text classification, distantly supervised entity extraction and entity classification) and consistently outperforms approaches that use just node features or node embeddings.
  • The key takeaway is that joint training in the semi-supervised setting has several benefits over the unsupervised setting, and that using the graph context (in terms of node embeddings) is much more effective than using a graph-Laplacian-based regularization term.
diff --git a/_site/site/2017/12/24/PTE-Predictive-Text-Embedding-through-Large-scale-Heterogeneous-Text-Networks.html b/_site/site/2017/12/24/PTE-Predictive-Text-Embedding-through-Large-scale-Heterogeneous-Text-Networks.html new file mode 100644 index 00000000..efc404a7 --- /dev/null +++ b/_site/site/2017/12/24/PTE-Predictive-Text-Embedding-through-Large-scale-Heterogeneous-Text-Networks.html @@ -0,0 +1,97 @@ +

Introduction

  • Unsupervised text embeddings generalize across different tasks but have weaker predictive power (compared to end-to-end trained deep learning methods) for any particular task, while the deep learning techniques are expensive and need a large amount of supervised data and a large number of parameters to tune.
  • The paper introduces Predictive Text Embedding (PTE) - a semi-supervised approach which learns an effective low-dimensional representation using a large amount of unsupervised data and a small amount of supervised data.
  • The work can be extended to general information networks, where classic techniques like MDS, IsoMap, Laplacian Eigenmaps, etc. do not scale well for large graphs.
  • Further, this model can be applied to heterogeneous networks, unlike the previous works LINE and DeepWalk, which work on homogeneous networks only.
  • Link to the paper

Approach

  • The paper proposes 3 different kinds of networks:
    • A Word-Word network which captures the word co-occurrence information (local level).
    • A Word-Document network which captures the word-document co-occurrence information (local + document level).
    • A Word-Label network which captures the word-label co-occurrence information (bipartite graph).
  • All 3 graphs are integrated into one heterogeneous text network.
  • First, the authors extend their previous work, LINE, to heterogeneous bipartite text networks, as explained below:
    • Given a bipartite graph G = (VA ∪ VB, E), where VA and VB are disjoint sets of vertices, the conditional probability of va (in set VA) being generated by vb (in set VB) is given as the softmax score between the embeddings of va and vb, normalised by the sum of exponentials of the dot products between vb and all nodes in VA.
    • The second-order proximity can be determined by the conditional distribution p(·|vj).
    • The objective to be minimised is the KL divergence between the conditional distribution p(·|vj) and the empirical distribution p^(·|vj) (given as wi,j / degj).
    • The objective can be further simplified and optimised using SGD with edge sampling and negative sampling.
  • Now, all 3 individual networks can be interpreted as bipartite networks, so node representations for each of the 3 networks are obtained as described above (a sketch of the conditional probability follows this list).
  • For the word-label network, since the training data is sparse, one could either train the unlabelled networks first and then the labelled network, or train them all jointly.
  • For joint training, edges are sampled from the 3 networks alternately.
  • For the fine-tuning case, edges are first sampled from the unlabelled networks and then from the labelled network.
  • Once the word embeddings are obtained, a text embedding can be obtained by simply averaging the word embeddings.
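A minimal sketch of the bipartite conditional probability p(va | vb) and the averaged text embedding described above (small dense example; a real implementation would use negative sampling instead of the full softmax):

```python
import numpy as np

def conditional_prob(emb_a, emb_b, vb):
    """p(. | vb): softmax of dot products between vb and all nodes in V_A."""
    scores = emb_a @ emb_b[vb]
    e = np.exp(scores - scores.max())
    return e / e.sum()

def text_embedding(word_emb, word_ids):
    """Embed a piece of text as the average of its word embeddings."""
    return word_emb[word_ids].mean(axis=0)

# usage:
emb_a, emb_b = np.random.rand(5, 8), np.random.rand(3, 8)
p = conditional_prob(emb_a, emb_b, vb=1)    # distribution over the 5 V_A nodes
doc = text_embedding(emb_a, [0, 2, 4])
```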

Evaluation

  • Baseline Models
    • Local word co-occurrence based methods - SkipGram, LINE(Gww)
    • Document-word co-occurrence based methods - LINE(Gwd), PV-DBOW
    • Combined method - LINE (Gww + Gwd)
    • CNN
    • PTE
  • For long documents, PTE (joint) outperforms CNN and the other PTE variants and is around 10 times faster than the CNN model.
  • For short documents, PTE (joint) does not always outperform the CNN model, probably because word sense ambiguity is more relevant in short documents.
diff --git a/_site/site/2017/12/31/Distilling-the-Knowledge-in-a-Neural-Network.html b/_site/site/2017/12/31/Distilling-the-Knowledge-in-a-Neural-Network.html new file mode 100644 index 00000000..50c584ca --- /dev/null +++ b/_site/site/2017/12/31/Distilling-the-Knowledge-in-a-Neural-Network.html @@ -0,0 +1,92 @@ +

Introduction

  • In machine learning, it is common to train a single large model (with a large number of parameters) or an ensemble of multiple smaller models using the same dataset.
  • While such large models help to improve the performance of the system, they also make it difficult and computationally expensive to deploy the system.
  • The paper proposes to transfer the knowledge from such “cumbersome” models into a single, “simpler” model which is more suitable for deployment. This transfer of knowledge is referred to as “distillation”.
  • Link to the paper

Idea

  • Train the cumbersome model on the given training data in the usual way.
  • Train the simpler, distilled model using the class probabilities (from the cumbersome model) as soft targets. Thus, the simpler model is trained to generalise the same way as the cumbersome model.
  • If the soft targets have high entropy, they provide much more information than the hard targets, and the gradient varies less between training examples.
  • One approach is to minimise the L2 difference between the logits produced by the cumbersome model and the simpler model. This approach was pursued by Buciluǎ et al.
  • The paper proposes a more general solution, which it names “distillation”. The temperature of the final softmax is increased till the cumbersome model produces a suitably soft set of targets (from the final softmax layer). These soft targets are then used to train the simpler model.
  • It also shows that the proposed approach is, in fact, a generalization of the first approach.

Approach

  • In the simplest setting, the cumbersome model is first trained with a high value of the temperature, and the same temperature value is used to train the simpler model. The temperature is set to 1 when making predictions using the simpler model.
  • It helps to add an auxiliary objective function corresponding to the cross-entropy loss with the correct labels, though this second objective should be given a much lower weight. Further, the magnitude of the soft-target loss needs to be scaled by multiplying with the square of the temperature (a sketch follows this list).
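A minimal sketch of the distillation loss described above (T and the auxiliary weight alpha are illustrative hyperparameters):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.1):
    """Soft-target KL (scaled by T^2) plus a down-weighted hard-label loss."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return (1 - alpha) * soft + alpha * hard

# usage:
loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                         torch.randint(0, 10, (8,)))
```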

Experiment

  • The paper reports favourable results on the distillation task for the following domains:
    • Image Classification (on the MNIST dataset)
      • An extra experiment is performed where the simpler model is not shown any images of “3”, yet the model fails on only 133 out of 1010 test cases involving “3”.
    • Automatic Speech Recognition (ASR)
      • An extra experiment is performed where the baseline model is trained using both hard targets and soft targets alternatively, using only 3% of the total dataset.
      • The model using hard targets overfits and has poor test accuracy, while the model using soft targets does not overfit and gets much better test accuracy. This shows the regularizing effect of soft targets.
    • Training ensembles of specialists on very large datasets (the JFT dataset - an internal dataset at Google)
      • The experiment shows that while training a single large model would take a lot of time, the performance of the model can be improved by learning a small number of specialised networks (which are faster to train).
      • Though it is yet to be shown that the knowledge of such specialist models can be distilled back into a single model.
diff --git a/_site/site/2018/01/06/How-transferable-are-features-in-deep-neural-networks.html b/_site/site/2018/01/06/How-transferable-are-features-in-deep-neural-networks.html new file mode 100644 index 00000000..bca26258 --- /dev/null +++ b/_site/site/2018/01/06/How-transferable-are-features-in-deep-neural-networks.html @@ -0,0 +1,99 @@ +

Introduction

  • When neural networks are trained on images, they tend to learn the same kind of features in the first layer (corresponding to Gabor filters or colour blobs). The first-layer features are “general”, irrespective of the task/optimizer etc.
  • The final-layer features tend to be “specific” in the sense that they depend strongly on the task.
  • The paper studies the transition of the generality property across the layers of the network. This is useful for transfer learning, where features are reused across tasks.
  • Link to the paper

Setup

+ +
    +
  • +

    Degree of generality of a set of features, learned on task A, is defined as the extent to which these features can be used for another task B.

    +
  • +
  • +

    Randomly split 1000 ImageNet classes into 2 groups (corresponding to tasks A and B). Each group has 500 classes and half the total number of examples.

    +
  • +
  • +

    Two 8-layer convolutional networks are trained on the two datasets and labelled as baseA and baseB respectively.

    +
  • +
  • +

    Now choose a layer numbered n from {1, 2…7}.

    +
  • +
  • +

    For each layer n, train the following two networks:

    + +
      +
    • Selffer Network BnB +
        +
      • Copy (and freeze) first n layers from baseB. The remaining layers are initialized randomly and trained on B.
      • +
      • This serves as the control group.
      • +
      +
    • +
    • Transfer Network AnB +
        +
      • Copy (and freeze) first n layers from baseA. The remaining layers are initialized randomly and trained on B.
      • +
      • This corresponds to transferring features from A to B.
      • +
      +
    • +
    +
  • +
  • +

    If AnB performs well, nth layer features are “general”.

    +
  • +
  • +

    In another setting, the transferred layers are also fine-tuned (BnB+ and AnB+).

    +
  • +
  • +

    ImageNet dataset contains a hierarchy of classes which allow for creating the datasets A and B with high and low similarity.

    +
  • +
+ +
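
A minimal sketch of the AnB construction, assuming the networks are simple nn.Sequential stacks (the helper name and layer handling are illustrative, not from the paper's code):

    import copy
    import torch.nn as nn

    def make_AnB(baseA: nn.Sequential, n: int) -> nn.Sequential:
        """First n layers copied (and frozen) from baseA; the rest re-initialized."""
        net = copy.deepcopy(baseA)
        for i, layer in enumerate(net):
            if i < n:
                for p in layer.parameters():
                    p.requires_grad = False      # transferred and frozen
            elif hasattr(layer, "reset_parameters"):
                layer.reset_parameters()         # random init, then trained on B
        return net

BnB is the same construction with baseB as the source, and the + (fine-tuned) variants simply skip the freezing step.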

Observation

Dataset A and B are similar

  • For n = {1, 2}, the performance of the BnB model is the same as the baseB model. For n = {3, 4, 5, 6}, the performance of the BnB model is worse.
  • This indicates the presence of “fragile co-adaptation” between successive layers, where features interact with each other in a complex way and cannot be easily separated across layers. This is more prominent across the middle layers and less across the first and the last layers.
  • The AnB model matches the performance of baseB for n = {1, 2}. Beyond that, the performance begins to drop.
  • Transfer of features followed by fine-tuning gives better results than training the network from scratch.

Dataset A and B are dissimilar

  • The effectiveness of feature transfer decreases as the two tasks become less similar.

Random Weights

  • Instead of using transferred weights in BnB and AnB, the first n layers were initialized randomly.
  • The performance falls for layers 1 and 2, and drops further, to near-random levels, for layers 3 and beyond.
  • Another interesting insight is that even for dissimilar tasks, transferring features is better than using random features.
diff --git a/_site/site/2018/01/14/Exploring-Models-and-Data-for-Image-Question-Answering.html b/_site/site/2018/01/14/Exploring-Models-and-Data-for-Image-Question-Answering.html new file mode 100644 index 00000000..bd678269 --- /dev/null +++ b/_site/site/2018/01/14/Exploring-Models-and-Data-for-Image-Question-Answering.html @@ -0,0 +1,77 @@ +

Introduction

  • Problem Statement: Given an image, answer a given question about the image.
  • Link to the paper
  • Assumptions:
    • The answer is assumed to be a single word, thereby bypassing the evaluation issues of multi-word generation tasks.

VIS-LSTM Model

  • Treat the input image as the first word in the question (a sketch follows this list).
  • Obtain the vector representation (skip-gram) for words in the question.
  • Obtain the VGG Net embeddings of the image and use a linear transformation (a dimensionality-reduction weight matrix) to match the dimensions of the word embeddings.
  • Keep the image embedding frozen during training and use an LSTM to combine the word vectors.
  • The LSTM outputs are fed into a softmax layer which generates the answer.
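
A minimal sketch of this pipeline, assuming precomputed VGG features; dimensions and names are illustrative, not from the paper's code:

    import torch
    import torch.nn as nn

    class VISLSTM(nn.Module):
        """Treat the (linearly projected) image embedding as the first word."""
        def __init__(self, vocab_size, num_answers, img_dim=4096, emb_dim=300, hid_dim=512):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, emb_dim)   # e.g. skip-gram init
            self.img_proj = nn.Linear(img_dim, emb_dim)         # dimensionality reduction
            self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.classifier = nn.Linear(hid_dim, num_answers)   # softmax over one-word answers

        def forward(self, img_feat, question_tokens):
            # img_feat comes from a frozen, pre-trained VGG network.
            img = self.img_proj(img_feat).unsqueeze(1)          # image as the "first word"
            words = self.word_emb(question_tokens)
            seq = torch.cat([img, words], dim=1)
            _, (h, _) = self.lstm(seq)
            return self.classifier(h[-1])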

Dataset

  • DAtaset for QUestion Answering on Real-world images (DAQUAR)
    • 1300 images and 7000 questions with 37 object classes.
    • The downside is that even guesswork can yield good results.
  • The paper proposes an algorithm for generating questions from the MS-COCO dataset.
    • Perform preprocessing steps like breaking up large sentences and changing indefinite determiners to definite ones.
    • Object, number, colour and location questions can be generated by searching for nouns, numbers, colours and prepositions respectively.
    • The resulting dataset has ~120K questions across the above 4 semantic types.

Models

  • VIS+LSTM - explained above.
  • 2-VIS+BLSTM - Adds the image features twice, at the beginning and at the end (using different linear transformations), and uses a bidirectional LSTM.
  • IMG+BOW - Multinomial logistic regression on image features without dimensionality reduction, plus bag-of-words (averaged word vectors).
  • FULL - A simple average of the above models.

Baseline

  • Includes models where the answer is guessed, where only image or question features are used, or where image features are combined with prior knowledge of objects.
  • Also includes a KNN model where the system finds the nearest (image, question) pair.

Metrics

  • Accuracy
  • Wu-Palmer similarity measure

Observations

  • The VIS-LSTM model outperforms the baselines, while the FULL model benefits from averaging across all the models.
  • Some useful information seems to be lost when downsizing the VGG vectors.
  • Fine-tuning the word vectors helps with performance.
  • Normalising the CNN hidden image features to zero mean and unit variance leads to faster training.
  • The model does not perform well on questions about spatial relations between multiple objects, or on counting when multiple objects are present.
diff --git a/_site/site/2018/01/22/Emotional-Chatting-Machine-Emotional-Conversation-Generation-with-Internal-and-External-Memory.html b/_site/site/2018/01/22/Emotional-Chatting-Machine-Emotional-Conversation-Generation-with-Internal-and-External-Memory.html new file mode 100644 index 00000000..26ce6549 --- /dev/null +++ b/_site/site/2018/01/22/Emotional-Chatting-Machine-Emotional-Conversation-Generation-with-Internal-and-External-Memory.html @@ -0,0 +1,79 @@ +
  • The paper proposes ECM (Emotional Chatting Machine), which can generate both semantically and emotionally appropriate responses in a dialogue setting.
  • More specifically, given an input utterance (or dialogue) and the desired emotional category of the response, ECM is to generate an appropriate response that conforms to the given emotional category.
  • Link to the paper
  • Much of the recent, deep-learning-based work on conversational agents has focused on the encoder-decoder framework, where the input utterance (a given sequence of words) is mapped to a response utterance (a target sequence of words). This is the so-called seq2seq family of models.
  • The ECM model sits within this framework and introduces 3 new components:
    • Emotion Category Embedding
      • Embed the emotion categories into a real-valued, low-dimensional vector space.
      • These embeddings are used as input to the decoder and are learnt along with the rest of the model.
    • Internal Memory (a sketch follows this list)
      • Physiological, emotional responses are relatively short-lived and involve changes.
      • ECM accounts for this effect by adding an Internal Memory which captures the dynamics of emotion during decoding.
      • It starts with a “full” emotion value in the beginning and keeps decaying the emotion value over time.
      • How much of the emotion value is decayed is determined by a sigmoid gate.
      • By the time the sentence is decoded, the value becomes zero, signifying that the emotion has been completely expressed.
    • External Memory
      • Emotional responses are expected to carry emotionally strong words along with generic, neutral words.
      • An external memory is used to include the emotionally strong words explicitly, by using 2 non-overlapping vocabularies - a generic vocabulary and an emotion vocabulary (read from the external memory).
      • The two vocabularies are assigned different generation probabilities, and an output gate controls the weights of generic and emotion words.
      • This way, emotion words are included in an otherwise neutral response.
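
A minimal sketch of the internal-memory decay idea; the actual ECM read/write gates also condition on the word embedding and attention context, so this shows only the core mechanism:

    import torch
    import torch.nn as nn

    class InternalMemory(nn.Module):
        """A sigmoid gate decides what fraction of the remaining emotion
        value is spent at each decoding step; the state decays towards zero."""
        def __init__(self, emo_dim, state_dim):
            super().__init__()
            self.read_gate = nn.Linear(state_dim, emo_dim)

        def forward(self, emotion_state, decoder_state):
            g = torch.sigmoid(self.read_gate(decoder_state))  # fraction spent this step
            spent = g * emotion_state                         # fed to the decoder
            remaining = emotion_state - spent                 # regularized towards 0
            return spent, remaining
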
  • Loss function
    • The first component is the cross-entropy loss between the predicted and the target token distributions.
    • A regularization term on the internal memory makes sure the emotional state decays to 0 by the end of the decoding process.
    • Another regularization term on the external memory supervises the probability of selecting a generic vs an emotion word.
  • Dataset
    • STC dataset (~220K posts and ~4300K responses), annotated by an emotion classifier. Any error on the part of the classifier degrades the quality of the training dataset.
    • NLPCC dataset - an emotion classification dataset with 23105 sentences.
  • Metric
    • Perplexity, to evaluate the model at the content level.
    • Emotion accuracy, to evaluate the model at the emotional level.
  • ECM achieves a perplexity of 65.9 and an emotion accuracy of 0.773.
  • Based on human evaluations, ECM statistically outperforms the seq2seq baselines on both naturalness (the likelihood of the response being generated by a human) and emotion accuracy.
  • Notes
    • It is an interesting idea to let a sigmoid gate decide how the emotion “value” is spent while decoding. It seems similar to deciding how much to “attend” to the emotion value, the key difference being that the total attention is limited. It would be interesting to see the distribution of how much of the emotion value is spent at each decoding time step. If the curve is heavily biased towards, say, using most of the emotion value towards the end of the decoding process, maybe another regularisation term is needed to ensure a more balanced distribution of how the emotion is spent.
diff --git a/_site/site/2018/01/29/StarSpace-Embed-All-The-Things.html b/_site/site/2018/01/29/StarSpace-Embed-All-The-Things.html new file mode 100644 index 00000000..ce16a74e --- /dev/null +++ b/_site/site/2018/01/29/StarSpace-Embed-All-The-Things.html @@ -0,0 +1,57 @@ +

Introduction

  • The paper describes a general-purpose neural embedding model where different types of entities (described in terms of discrete features) are embedded in a common vector space.
  • A similarity function is learnt to compare these entities in a meaningful way and score their similarity. The choice of similarity function can depend on the downstream task where the embeddings are used.
  • Link to the paper
  • Link to the implementation

Approach

  • Each entity is described as a set of discrete features. For example, in the recommendation use case, a user may be described as a bag of the movies they have liked; in the search use case, a document may be described as a bag of the words it is made up of.
  • Given a dataset and a task at hand, generate a set of positive samples E = (a, b) such that a is the input to the task (from the dataset) and b is the expected label (answer/entity) for the given task.
  • Similarly, generate a set of negative samples E- = (a, bi-) such that bi- is an incorrect label (answer/entity) for the given task. The incorrect entity can be sampled randomly from the set of candidate entities, and multiple incorrect samples can be generated for each positive example. These incorrect samples are indexed by i.
  • For example, in a supervised learning problem like document classification, a would be one of the documents (probably described in terms of words), b the correct label, and bi- a label sampled randomly from the set of all labels (excluding the correct one).
  • In the case of collaborative filtering, a would be the user (either described as a discrete entity like a user id or in terms of the items purchased so far), b the next item the user purchases, and bi- an item sampled randomly from the set of all items.
  • A similarity function is chosen to compare the representations of entities of type a and b. The paper considered cosine similarity and the inner product, and observed that cosine similarity works better when there is a large number of entities.
  • A loss function compares the similarity between the positive pair (a, b) and the negative pairs (a, bi-). The paper considered the margin ranking loss and the negative log loss of softmax, and reported that the margin ranking loss works better (a sketch follows).
  • The norm of the embeddings is capped at 1.
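
A minimal sketch of the margin ranking loss with the cosine-similarity variant; shapes and the margin value are illustrative placeholders:

    import torch
    import torch.nn.functional as F

    def starspace_loss(a_emb, b_pos_emb, b_neg_embs, margin=0.2):
        """a_emb: (d,), b_pos_emb: (d,), b_neg_embs: (k, d) sampled negatives.
        An entity embedding is assumed to be the (norm-capped) sum of its
        discrete-feature embeddings."""
        pos = F.cosine_similarity(a_emb.unsqueeze(0), b_pos_emb.unsqueeze(0)).squeeze(0)
        negs = F.cosine_similarity(a_emb.unsqueeze(0), b_neg_embs)
        # Each negative should score at least `margin` below the positive.
        return torch.clamp(margin - pos + negs, min=0).mean()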

Observations

  • The same model architecture is applied to a variety of tasks, including multi-class classification, multi-label classification, collaborative filtering, content-based recommendation, link prediction, information retrieval, word embeddings and sentence embeddings.
  • The model provides a strong baseline on all the tasks and performs at par with much more complicated, task-specific networks.
+ diff --git a/_site/site/2018/02/05/Get-To-The-Point-Summarization-with-Pointer-Generator-Networks.html b/_site/site/2018/02/05/Get-To-The-Point-Summarization-with-Pointer-Generator-Networks.html new file mode 100644 index 00000000..c4500791 --- /dev/null +++ b/_site/site/2018/02/05/Get-To-The-Point-Summarization-with-Pointer-Generator-Networks.html @@ -0,0 +1,72 @@ +

Introduction

  • Sequence-to-sequence models have made abstractive summarization viable, but they still suffer from issues like out-of-vocabulary words and repetitive sentences.
  • The paper proposes to overcome these limitations by using a hybrid pointer-generator network (to copy words from the source text) and a coverage vector that keeps track of the content that has already been summarized, so as to discourage repetition.
  • Link to the paper
  • Code

Model

Pointer Generator Network

  • It is a hybrid of the sequence-to-sequence network and the pointer network: when generating a word, the model decides whether the word should be produced from the softmax vocabulary (sequence-to-sequence) or copied from the source vocabulary (pointer network).
  • Since the model can choose a word from the source vocabulary, the issue of out-of-vocabulary words is handled.

Coverage Mechanism

  • The model maintains a coverage vector, which is the sum of the attention distributions over all previous decoder timesteps.
  • This coverage vector is fed as an input to the attention mechanism.
  • A coverage loss is added to prevent the model from repeatedly attending to the same word.
  • The idea is to capture how much coverage each source word has already received from the attention mechanism. A sketch of both pieces follows.
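
A minimal sketch of one decoding step; tensor shapes and names are illustrative, not from the paper's code:

    import torch

    def final_distribution(p_gen, vocab_dist, attn_dist, src_ids, vocab_size):
        """p_gen: (B, 1) generation probability; vocab_dist: (B, V) generator
        softmax; attn_dist: (B, L) attention over source positions;
        src_ids: (B, L) source-token ids mapped into the (extended) vocabulary."""
        gen = p_gen * vocab_dist                        # generate from vocabulary
        copy = (1 - p_gen) * attn_dist                  # copy from the source text
        return gen.scatter_add(1, src_ids, copy)        # merge copy mass into vocab

    def coverage_loss(attn_dist, coverage):
        """Penalize re-attending: sum of elementwise min(attention, coverage)."""
        return torch.minimum(attn_dist, coverage).sum(dim=1).mean()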

Observation

  • When evaluated on the CNN/Daily Mail summarization task, the model outperforms the state-of-the-art by at least 2 ROUGE points, though it still does not outperform the lead-3 baseline.
  • The lead-3 baseline uses the first 3 sentences as the summary of the article, which should be a strong baseline given that the dataset consists of news articles.
  • The model is initially trained without coverage and then finetuned with the coverage loss.
  • During training, the model first learns how to copy words and then how to generate words (pgen starts at 0.3 and converges to 0.53).
  • During testing, the model strongly prefers copying over generating (pgen = 0.17).
  • Further, whenever the model is at the beginning of a sentence or at the join between stitched-together fragments, it prefers to generate a word instead of copying one from the source text.
  • The overall model is simple, neat and interpretable, and also performs well in practice.
diff --git a/_site/site/2018/02/11/Stylistic-Transfer-in-Natural-Language-Generation-Systems-Using-Recurrent-Neural-Networks.html b/_site/site/2018/02/11/Stylistic-Transfer-in-Natural-Language-Generation-Systems-Using-Recurrent-Neural-Networks.html new file mode 100644 index 00000000..8676492f --- /dev/null +++ b/_site/site/2018/02/11/Stylistic-Transfer-in-Natural-Language-Generation-Systems-Using-Recurrent-Neural-Networks.html @@ -0,0 +1,48 @@ +

Introduction

  • This workshop paper explores the problem of style transfer in natural language generation (NLG).
  • One possible manifestation would be rewriting technical articles in an easy-to-understand manner.

Challenges

  • Identifying relevant stylistic cues and using them to control text generation in NLG systems.
  • Absence of a large amount of training data.

Pitch

  • Use Recurrent Neural Networks (RNNs) to disentangle the style from the semantic content.
  • An autoencoder model with two components - one for learning style and another for learning content.
  • This allows the “style” component to be replaced while keeping the “content” component the same, resulting in a style transfer.
  • One way to think about this: the encoder generates a 100-dimensional vector, in which the first 50 entries correspond to the “style” component and the remaining to the “content” component.
  • The proposal is that the loss function should be modified to include a cross-covariance term for ensuring disentanglement.
  • I think one way of doing this is to have two loss functions (sketched after this list):
    • The first loss function ensures that the input sentence is decoded properly into the target sentence. This loss is computed for each sentence.
    • The second loss applies the cross-covariance term over the encoded representations, relating the first 50 entries to the rest. This loss operates at the batch level.
    • The total loss is the weighted sum of these 2 losses.
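
A minimal sketch of this two-part objective, under one plausible reading of the cross-covariance term (penalizing covariance between the style and content halves of the latent); the 50/100 split is the illustrative one from the text:

    import torch
    import torch.nn.functional as F

    def disentangled_losses(decoder_logits, target_ids, latents, style_dim=50):
        """decoder_logits: (B, L, V); target_ids: (B, L); latents: (B, d)."""
        # 1) Per-sentence reconstruction loss.
        recon = F.cross_entropy(decoder_logits.flatten(0, 1), target_ids.flatten())
        # 2) Batch-level cross-covariance between style and content entries;
        #    driving it to zero encourages the two halves to be decorrelated.
        z = latents - latents.mean(dim=0, keepdim=True)
        style, content = z[:, :style_dim], z[:, style_dim:]
        xcov = (style.T @ content / z.shape[0]).pow(2).sum()
        return recon, xcov   # total loss = weighted sum of the two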

Possible Datasets


Possible Metrics

  • Soundness - is the generated text entailed by the input sentence.
  • Coherence - is the text free of grammatical errors, with proper word usage etc.
  • Effectiveness - how effective was the style transfer.
  • Since some of the metrics are subjective, human evaluators also need to be employed.
diff --git a/_site/site/2018/02/17/Neural-Relational-Inference-for-Interacting-Systems.html b/_site/site/2018/02/17/Neural-Relational-Inference-for-Interacting-Systems.html new file mode 100644 index 00000000..cccd236d --- /dev/null +++ b/_site/site/2018/02/17/Neural-Relational-Inference-for-Interacting-Systems.html @@ -0,0 +1,97 @@ +

Introduction

  • The paper presents the Neural Relational Inference (NRI) model, which can infer the underlying interactions in a dynamical system in an unsupervised manner, using just observational data in the form of trajectories.
  • For instance, consider a simulated system where particles are connected to each other by springs. The observational data does not explicitly specify which particles are connected to each other and only contains information like the position and velocity of each particle at different timesteps.
  • The task is to explicitly infer the interaction structure (in this example, which pairs of particles are connected to each other) while learning the dynamical model of the system itself.
  • Link to the paper
  • Link to the implementation

Model

  • The model consists of an encoder that encodes the given trajectories into an interaction graph and a decoder that decodes the dynamical model given the interaction graph.
  • The model starts by assuming that a fully connected interaction graph exists between the objects in the system.
  • For this latent graph z, z_ij denotes the (discrete) edge type between objects v_i and v_j, with the assumption that there are K edge types.
  • Each object v_i has a feature vector x_i^t associated with it at time t. This feature vector captures information like location and velocity.

Encoder

  • A Graph Neural Network (GNN) acts on the fully connected latent graph z, performs message passing from node to node via the edges, and predicts a discrete label for each edge.
  • The GNN architecture may itself use MLPs or ConvNets and returns a factorised distribution over the edge types, q_φ(z|x).

Decoder

  • The decoder is another GNN (with separate parameters for each edge type) that predicts the future dynamics of the system and returns p_θ(x|z).
  • The overall model is a VAE that optimizes the ELBO, given as:

    E_{q_φ(z|x)}[log p_θ(x|z)] − KL[q_φ(z|x) || p_θ(z)]

  • p_θ(z) is the prior, which is assumed to be a uniform distribution over the edge types.
  • Instead of predicting the dynamics of the system for just the next timestep, the paper chooses to predict multiple steps (10) into the future. This ensures that the interactions can have a significant effect on the dynamics of the system.
  • In some cases, like real humans playing a physical sport, the dynamics of the system need not be Markovian, and a recurrent decoder is used to model the time dependence.

Pipeline

  • Given the dynamical system, run the encoder to obtain q_φ(z|x).
  • Sample z_ij from q_φ(z|x).
  • Run the decoder to predict the future dynamics for the next T timesteps.
  • Optimise the ELBO loss.
  • Note that since the latent variables (edge labels) are discrete in this case, the sampling is done from a continuous approximation of the discrete distribution, and the reparameterization trick is applied over this continuous approximation to get (biased) gradients. A sketch follows.
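
A minimal sketch of the sampling step using the Gumbel-softmax (concrete) relaxation; the temperature tau is an illustrative placeholder:

    import torch
    import torch.nn.functional as F

    def sample_edges(edge_logits, tau=0.5):
        """Differentiable samples from the discrete edge-type posterior.
        edge_logits: (num_edges, K) unnormalized log-probs over K edge types."""
        g = -torch.log(-torch.log(torch.rand_like(edge_logits)))  # Gumbel(0,1) noise
        return F.softmax((edge_logits + g) / tau, dim=-1)         # relaxed one-hot z_ij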

Observations

  • Experiments are performed on simulated systems (particles connected by springs, phase-coupled oscillators and charged particles) and on real-world data (the CMU Motion Capture database and NBA tracking data).
  • The NRI system effectively predicts the dynamics of these systems and is able to reconstruct the ground-truth interaction graph (for the simulated systems).
diff --git a/_site/site/2018/02/24/Learning-a-SAT-Solver-from-Single-Bit-Supervision.html b/_site/site/2018/02/24/Learning-a-SAT-Solver-from-Single-Bit-Supervision.html new file mode 100644 index 00000000..bee8f3c8 --- /dev/null +++ b/_site/site/2018/02/24/Learning-a-SAT-Solver-from-Single-Bit-Supervision.html @@ -0,0 +1,95 @@ +

Introduction

  • The paper presents NeuroSAT, a message passing neural network that is trained to predict whether a given SAT problem is satisfiable. As a side effect of training, the model also learns how to solve the SAT problem itself, without any extra supervision.
  • Link to the paper

Background

  • Given an expression in propositional logic, the task is to predict whether there exists a substitution of the variables that makes the expression true.
  • The expression itself can be written as a conjunction of disjunctions (“and” over “or”), where each conjunct is called a clause and each (possibly negated) variable within a clause is called a literal.
  • Invariants:
    • The variables, the clauses, or the literals (within the clauses) can be permuted.
    • Every occurrence of a variable can be negated.

Model

  • Given the SAT problem, create an undirected graph over the literals, their negations and the clauses they belong to.
  • Put an edge between every literal and the clause it belongs to, and another kind of edge between every literal and its negation.
  • Perform message passing between the nodes to obtain vector representations for each node. Specifically, each clause first receives a message from its neighbours (literals) and updates its embedding; then every literal receives a message from its neighbours (both literals and clauses) and updates its embedding. A sketch of one such round follows.
  • After T iterations, the nodes vote to decide the prediction of the model as a whole.
  • The model is trained end-to-end using the cross-entropy loss between the logit and the true label.
  • Permutation invariance is ensured by operating on the nodes and the edges in an order-independent way, and negation invariance is ensured by treating all literals the same.
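
A much-simplified sketch of one round of this message passing (the paper's model uses LSTM updates; plain MLPs and all names here are illustrative):

    import torch
    import torch.nn as nn

    def mp_round(L, C, M, clause_update, literal_update, n):
        """L: (2n, d) literal embeddings (first n: variables, last n: negations);
        C: (m, d) clause embeddings; M: (m, 2n) clause-literal incidence matrix."""
        C = clause_update(torch.cat([C, M @ L], dim=1))       # clauses read from literals
        flipped = torch.cat([L[n:], L[:n]])                   # pair each literal with its negation
        L = literal_update(torch.cat([L, M.T @ C, flipped], dim=1))
        return L, C

    # Illustrative usage on a random instance:
    d, n, m = 64, 10, 30
    M = (torch.rand(m, 2 * n) < 0.2).float()
    L, C = torch.randn(2 * n, d), torch.randn(m, d)
    clause_update = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
    literal_update = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU())
    for _ in range(5):
        L, C = mp_round(L, C, M, clause_update, literal_update, n)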

Decoding Satisfying Assignment

  • The most interesting aspect of this work is that even though the model was trained only to predict whether the SAT problem can be satisfied, it is possible to extract the satisfying assignment from the classifier.
  • In the early iterations, all the nodes vote “unsolvable” with low confidence. Then a few nodes start voting “solvable”, and then a phase transition happens where most of the nodes start voting “solvable” with high confidence.
  • The model never becomes highly confident that a problem is “unsolvable” and almost never guesses “solvable” on an “unsolvable” problem. So in some sense, the model is looking for the combination of literals that actually solves the problem.
  • The authors found that the 2-dimensional PCA projections of the literal embeddings are initially mixed up but become more and more linearly separable as the phase transition happens.
  • Based on this insight, the authors propose to obtain cluster centres C1 and C2, partition the variables according to the cluster centres, and then try the assignments from both partitions.
  • This alone provides a satisfying solution in over 70% of the satisfiable cases, even though there is no explicit supervising signal about how to solve the problem.
  • The other strengths of the paper include:
    • Generalizing to longer and more difficult SAT problems (than those seen during training).
    • Generalizing to other kinds of search problems like graph colouring, clique detection etc. (over small random graphs).
  • The paper also reports that by adding a supervising signal about which clauses in the given expression are unsatisfiable, it is possible to decode the literals which prove the “unsatisfiability” of an expression at test time. Not many details are provided about this part; they would probably be covered in the next iteration of the paper.
+ diff --git a/_site/site/2018/03/05/An-Empirical-Investigation-of-Catastrophic-Forgetting-in-Gradient-Based-Neural-Networks.html b/_site/site/2018/03/05/An-Empirical-Investigation-of-Catastrophic-Forgetting-in-Gradient-Based-Neural-Networks.html new file mode 100644 index 00000000..57c40a98 --- /dev/null +++ b/_site/site/2018/03/05/An-Empirical-Investigation-of-Catastrophic-Forgetting-in-Gradient-Based-Neural-Networks.html @@ -0,0 +1,80 @@ +

Introduction

  • Catastrophic forgetting refers to the phenomenon where a learning system trained on two tasks in succession may forget how to perform the first task.
  • The paper investigates this behaviour for different activation functions, in the presence and absence of dropout.
  • Link to the paper
  • Link to the implementation

Experiment Formulation

  • For each experiment, two tasks are defined - an “old” task and a “new” task.
  • The network is first trained on the “old” task until the validation set error has not improved for the last 100 epochs.
  • The “best” performing model is then trained on the “new” task until the combined error on the “old” and the “new” validation datasets has not improved in the last 100 epochs. (A sketch of this stopping rule follows.)
  • All the tasks used the same model architecture - 2 hidden layers followed by a softmax layer.
  • The following activations were tested:
    • Sigmoid
    • ReLU
    • Hard Local Winner Take All (LWTA)
    • Maxout
  • Models were trained using SGD, with or without dropout.
  • For each combination of model, activation and training mechanism, a random search over 25 hyperparameter configurations was performed.
  • The authors took care to keep the hyperparameters and other settings consistent and comparable across experiments. Deviations, wherever applicable, and their reasons were documented.
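
A minimal sketch of the stopping rule, assuming generic train/validation callables (names are illustrative):

    import copy

    def train_until_no_improvement(model, train_one_epoch, val_error, patience=100):
        """Train until the validation error has not improved for `patience`
        epochs, then restore and return the best-performing weights."""
        best_state, best_err, epochs_since = None, float("inf"), 0
        while epochs_since < patience:
            train_one_epoch(model)
            err = val_error(model)
            if err < best_err:
                best_state, best_err, epochs_since = copy.deepcopy(model.state_dict()), err, 0
            else:
                epochs_since += 1
        model.load_state_dict(best_state)
        return model

For the second phase, val_error would compute the combined error over the union of the “old” and “new” validation sets, as described above.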

Observations

  • In terms of the relationship between the “old” and the “new” tasks, three settings are considered:
    • The tasks are very similar but the input is processed in a different format. For this setting, the MNIST dataset was used with a different permutation of the pixels for the “old” and the “new” task.
    • The tasks are similar but not exactly the same. For this setting, the task was to predict the sentiment of reviews across 2 different product categories.
    • In the last setting, 2 dissimilar tasks were used: one task was to predict the sentiment of reviews and the other was classification over the MNIST dataset (reduced to 2 classes).
  • Using dropout improved the overall validation performance for all the models on all the tasks.
  • Using dropout also increased the size of the optimal model across all the activations, indicating that the increased model size may explain the increased resistance to forgetting. It would have been interesting to check whether dropout always selected the largest model possible given the hyperparameter search space.
  • On the dissimilar tasks, dropout improved the performance while reducing the model size, so it might have other properties as well that help prevent forgetting.
  • Compared to the choice of training technique, the activation function has a less consistent effect on the resistance to forgetting. The paper recommends cross-validating the choice of activation function. If that is not feasible, the maxout activation function with dropout could be used.
diff --git a/_site/site/2018/03/11/Improving-Information-Extraction-by-Acquiring-External-Evidence-with-Reinforcement-Learning.html b/_site/site/2018/03/11/Improving-Information-Extraction-by-Acquiring-External-Evidence-with-Reinforcement-Learning.html new file mode 100644 index 00000000..7633f1cb --- /dev/null +++ b/_site/site/2018/03/11/Improving-Information-Extraction-by-Acquiring-External-Evidence-with-Reinforcement-Learning.html @@ -0,0 +1,120 @@ +

Introduction

  • Information extraction - given a query to be answered and an external search engine, information extraction entails issuing search queries, extracting information from the retrieved sources, and reconciling the extracted values until we are sufficiently confident about them.
  • The paper proposes the use of Reinforcement Learning (RL) to solve this task.
  • Link to the paper
  • Implementation

Key Aspect

  • Use of Reinforcement Learning to resolve the ambiguity inherent in textual documents.
  • Given a query, the RL agent uses templates to formulate search queries (issued to the black-box search engine). It then resolves and combines the values extracted from the set of retrieved documents.

Datasets

  • Database of mass shootings in the United States.
  • Food Shield database of illegal food adulteration.

Framework

  • The information extraction task is modelled as a Markov Decision Process (MDP) <S, A, T, R>.
  • S - set of all possible states
    • The state consists of:
      • The extractor’s confidence in the predicted entity values.
      • The context from which the values are extracted.
      • The similarity between the new document (just retrieved from the search engine) and the original document accompanying the given query.
  • A - set of all possible actions
    • Reconciliation decision - d
      • Accept all entity values.
      • Reject all entity values.
      • Stop the current episode.
    • Query choice - q
      • Choose the next query from a set of automatically generated alternatives.
  • R - rewards
    • Maximise the final extraction accuracy while minimising the number of queries.
  • Queries
    • Generated using templates.
    • Each query is issued to a search engine and the top k links are retrieved.
  • Transitions
    • Start with a single source article x_i and extract the initial set of entities.
    • At each timestep, the agent is given the state s, on the basis of which it chooses the action (d, q). The episode stops whenever a stop action is taken. (A sketch of this loop follows.)
  • A Deep Q-Network is used.
  • Parameters are learned using SGD with RMSProp.
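
A minimal sketch of one episode under this MDP; every callable here (extract, search, reconcile, make_state, agent) is a stand-in for the paper's components, not an actual API:

    def run_episode(article, extract, search, reconcile, make_state, agent, max_steps=20):
        values = extract(article)                 # initial entity values + confidences
        state = make_state(values, article)
        for _ in range(max_steps):
            d, q = agent(state)                   # reconciliation decision + query choice
            if d == "stop":
                break
            docs = search(q)                      # top-k documents for a templated query
            values = reconcile(values, extract(docs), d)   # accept/reject new values
            state = make_state(values, docs)
        return values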

Experimental Setup

Extraction Model

  • A maximum entropy (MaxEnt) classifier is used as the base extraction system.
  • First, all the words in the document are tagged as one of the entity types, and the mode of these values is used to obtain the set of extracted entities.

Baseline

  • Basic extractors.
  • An aggregation system which either chooses the entity value with the highest confidence or takes a majority vote over all extracted values.
  • A meta-classifier which operates over the same input state space and produces the same set of reconciliation decisions as the DQN.
  • An oracle extractor, computed assuming perfect reconciliation and query decisions on top of the MaxEnt base extractor.

RL Models

  • RL Basic - only the reconciliation decision.
  • RL Query - only the query decision, with a fixed reconciliation strategy.
  • RL Extract - the full system with both reconciliation and query decisions.

Result

  • RL Extract obtains substantial gains, e.g., up to 11% over the MaxEnt baseline.
  • Simple aggregation schemes do not handle the task well.
  • In terms of reward structure, providing rewards after each step works better than a single delayed reward.
diff --git a/_site/site/2018/03/18/Cyclical-Learning-Rates-for-Training-Neural-Networks.html b/_site/site/2018/03/18/Cyclical-Learning-Rates-for-Training-Neural-Networks.html new file mode 100644 index 00000000..038f2ef3 --- /dev/null +++ b/_site/site/2018/03/18/Cyclical-Learning-Rates-for-Training-Neural-Networks.html @@ -0,0 +1,66 @@ +

Introduction

  • Conventional wisdom says that, when training neural networks, the learning rate should monotonically decrease. This insight forms the basis of the different types of adaptive learning-rate schedules.
  • Counter to this expected behaviour, the paper demonstrates that using a cyclical learning rate (CLR), varying between a minimum and a maximum value, helps to train the network faster, without requiring fine-tuning of the learning rate.
  • The paper also provides a simple approach for estimating the lower and upper bounds of the CLR.
  • Link to the paper
  • Link to the implementation

Intuition

  • Difficulty in minimizing the loss arises from saddle points and not from local minima. [Ref]
  • Increasing the learning rate allows for rapid traversal of saddle points.
  • Alternatively, the optimal learning rate is expected to lie between the bounds of the CLR, so the learning rate is always close to the optimal learning rate.

Parameter Estimation

  • Cycle length = number of iterations until the learning rate returns to the initial value = 2 * step_size (the triangular policy is sketched after this section).
  • step_size should be set to 2-10 times the number of iterations in an epoch.
  • Estimating the CLR boundary values:
    • Run the model for several epochs while increasing the learning rate between the allowed low and high values.
    • Plot accuracy vs learning rate and note the learning-rate values at which the accuracy starts to fall.
    • This gives good candidate values for the upper and lower bounds. Alternatively, the lower bound could be set to 1/3 or 3/4 of the upper bound. One caveat is that it is difficult to judge whether the model has run for a sufficient number of epochs in the first place.
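
A sketch of the triangular CLR policy as it is commonly implemented: the rate ramps linearly from base_lr to max_lr and back over each cycle of 2 * step_size iterations.

    import math

    def triangular_clr(iteration, step_size, base_lr, max_lr):
        cycle = math.floor(1 + iteration / (2 * step_size))
        x = abs(iteration / step_size - 2 * cycle + 1)   # position within the cycle
        return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)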

Notes

  • The idea is simple and straightforward to add to any existing model, which makes it very appealing.
  • The author has experimented with various architectures and datasets (from the vision domain) and reports faster training results.
diff --git a/_site/site/2018/03/25/The-Lottery-Ticket-Hypothesis-Training-Pruned-Neural-Networks.html b/_site/site/2018/03/25/The-Lottery-Ticket-Hypothesis-Training-Pruned-Neural-Networks.html new file mode 100644 index 00000000..93c921fe --- /dev/null +++ b/_site/site/2018/03/25/The-Lottery-Ticket-Hypothesis-Training-Pruned-Neural-Networks.html @@ -0,0 +1,72 @@ +

Introduction

  • Empirical evidence indicates that, at training time, neural networks need to be significantly larger than necessary.
  • The paper proposes a hypothesis, called the lottery ticket hypothesis, to explain this behaviour.
  • The idea is the following: successful training of a neural network depends on a lucky random initialization of a subcomponent of the network. Such components are referred to as lottery tickets.
  • Larger networks are more likely to contain these lottery tickets and hence are easier to train.
  • Link to the paper

Methodology

  • Various aspects of the hypothesis are explored empirically.
  • Two tasks are considered - MNIST and XOR.
  • For each task, the paper considers networks of different sizes and empirically shows that, for a fixed number of epochs, larger networks are more likely to converge (or have better performance) than smaller networks.
  • Given a large, trained network, some weights (or units) of the network are pruned and the resulting network is reset to its initial random weights. (A sketch of this procedure follows.)
  • The resulting network is the lottery ticket, in the sense that when the pruned network is trained, it is more likely to converge than an otherwise randomly initialised network of the same size. Further, it is more likely to match the original, larger network in terms of performance.
  • The paper explores different aspects of this experiment:
    • Pruning strategies:
      • The one-shot strategy prunes the network in one go, while the iterative strategy prunes the network over several rounds.
      • Though the latter is computationally more intensive, it is more likely to find a lottery ticket.
    • The size of the pruned network affects the speed of convergence when training the lottery ticket.
    • If only the architecture or only the initial weights of the lottery ticket are reused, the resulting network tends to converge more slowly and achieves a lower level of performance.
    • This indicates that the lottery ticket depends on both the network architecture and the weight initialization.
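
A simplified sketch of iterative magnitude pruning with weight rewinding (the per-layer pruning fraction, the treatment of biases, and the `train` callable, which is assumed to re-apply the masks after each optimizer step, are all illustrative):

    import copy
    import torch

    def find_ticket(model, train, prune_fraction=0.2, rounds=5):
        """Train, prune the smallest surviving weights, rewind the rest to
        their initial values, and repeat."""
        init_state = copy.deepcopy(model.state_dict())
        masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
        for _ in range(rounds):
            train(model, masks)                          # train with masked weights
            for n, p in model.named_parameters():
                alive = p[masks[n].bool()].abs()
                thresh = alive.quantile(prune_fraction)  # prune smallest fraction
                masks[n] = masks[n] * (p.abs() > thresh).float()
            model.load_state_dict(init_state)            # rewind survivors to init
        return masks, init_state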

Discussion

  • The paper includes some more interesting experiments. For instance, the distribution of the initial values among the weights that survive pruning suggests that weights which are small before training tend to remain small after training.
  • One interesting experiment would be to report the performance of the pruned network before resetting its weights and retraining. This performance should be compared with that of the initial large network and that of the lottery ticket after training.
  • Overall, the experiments are not sufficient to conclude anything definitive about the correctness of the hypothesis, but the proposition itself is very interesting and could enhance our understanding of how neural networks work.
diff --git a/_site/site/2018/04/02/Unsupervised-Learning-By-Predicting-Noise.html b/_site/site/2018/04/02/Unsupervised-Learning-By-Predicting-Noise.html new file mode 100644 index 00000000..1bd6354d --- /dev/null +++ b/_site/site/2018/04/02/Unsupervised-Learning-By-Predicting-Noise.html @@ -0,0 +1,147 @@ +

Introduction

  • Convolutional neural networks are extremely good feature extractors, in the sense that features extracted for one task (say image classification) can be easily transferred to another task (say image segmentation).
  • Existing unsupervised approaches do not aim to learn discriminative features, and supervised approaches for discriminative features do not scale well.
  • The paper presents an approach to learn features in an unsupervised setting by using a set of target representations, called Noise As Targets (NAT), which act as a kind of proxy supervision signal.
  • Link to the paper

Approach

Unsupervised Setting

  • Given a collection of images X = (x1, x2, …, xn), we want to learn a parameterized mapping f such that f(xi) gives the features of image xi. The target vectors yi are learnt jointly (more on this later).

Loss Function

  • The squared L2 norm is used as the distance measure, while making sure that the final activations are unit-normalized.

Fixed Target Representation

  • In a setting where we learn both the features and the target representations, a trivial solution is the one where all the input images map to the same target and are assigned the same representation. No discriminative features are learned in this case.
  • To avoid such situations, a set of k predefined target representations is chosen and each image is mapped to one of these k representations (based on its features).
  • There is an assumption that k > n, so that each image can be assigned a different target.
  • One simple choice of target representation is the standard one-hot vector, which implies that all the classes (and, by extension, the associated images) are orthogonal and equidistant from each other. But this is not a reasonable approximation, as not all image pairs are equally similar or dissimilar.
  • Instead, the target vectors are uniformly sampled from a d-dimensional unit sphere, where d is the dimensionality of the feature representation. That is, the idea is to map the features onto the manifold of the d-dimensional L2 sphere, using the k predefined representations as a discrete approximation of the manifold.
  • Since each data point (image) is mapped to its own point on the manifold, the algorithm is suited for online training as well.

Optimisation

  • For training, the number of targets k is reduced to the number of images n, and an assignment matrix P is learned which ensures that the mapping between images and targets is 1-to-1.
  • The resulting optimisation problem can be solved using the Hungarian algorithm, but at a high cost, O(n^3). An optimisation is to take a batch of b images and update only the square sub-matrix P_B of dimension b×b (made of the images in the batch and their corresponding targets). This reduces the overall complexity to O(n b^2). A sketch of this batch-wise re-assignment follows.
  • Other optimisation techniques that are common in supervised learning, like batch norm, are used in this setting as well.
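
A minimal sketch of the batch-wise re-assignment, assuming NumPy arrays for the b features and their b currently-assigned targets:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def reassign_targets(features, targets):
        """Permute the b fixed targets so that the total squared L2 distance
        to the b (unit-normalized) features is minimized, via the Hungarian
        algorithm on a b x b cost matrix."""
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        cost = ((f[:, None, :] - targets[None, :, :]) ** 2).sum(-1)  # (b, b)
        rows, cols = linear_sum_assignment(cost)
        return targets[cols]   # targets[cols[i]] is the new target for feature i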

Implementation Detail

  • AlexNet is trained with NATs to obtain the unsupervised model.
  • An MLP is trained on these features to learn the classifier.
  • Standard preprocessing techniques like random cropping/flipping are used.

Experimental Details

  • Dataset
    • ImageNet for training the AlexNet architecture with the proposed approach.
    • Pascal VOC 2007 for the transfer learning experiments.
  • Baselines
    • Unsupervised approaches like autoencoders, GANs, BiGANs.
    • Self-supervised approaches.
    • SOTA models using handcrafted features (SIFT with Fisher Vectors).

Observation

  • Using the squared loss instead of softmax does not deteriorate the performance much.
  • The authors compare the effect of using discrete vs continuous target representations for transfer learning. For the discrete representations, elements of the canonical basis of a k-dimensional space (k = 1000, 10000, 100000) are used. Experiments demonstrate that the d-dimensional continuous vectors perform much better than the discrete vectors.
  • While training the unsupervised network, its features were extracted every 20 iterations to evaluate performance on the transfer learning task. The test accuracy increases up to around 100 iterations and then saturates.
  • Comparing the visualizations of the first convolutional layer filters (for AlexNet with and without supervision) shows that while the unsupervised filters are less sharp, they maintain the edge and orientation information.
  • The proposed unsupervised method outperforms all the unsupervised baselines and is competitive with the supervised baseline, but it is still far behind the model using handcrafted features.
  • For transfer learning on Pascal VOC, the proposed approach beats the unsupervised baselines and works at par with the supervised approach.

Notes

  • The paper proposes a simple unsupervised framework for learning discriminative features without having to rely on proxy tasks like image generation and without having to make assumptions about the input domain.
  • The key aspect of the proposed approach is that each image is assigned to a unique point on the d-dimensional manifold, which means 2 images could be very close to each other on the manifold while being quite distinct in reality. It is interesting that such a simple strategy gives such good results.
diff --git a/_site/site/2018/04/08/Neural-Message-Passing-for-Quantum-Chemistry.html b/_site/site/2018/04/08/Neural-Message-Passing-for-Quantum-Chemistry.html new file mode 100644 index 00000000..cf60f22d --- /dev/null +++ b/_site/site/2018/04/08/Neural-Message-Passing-for-Quantum-Chemistry.html @@ -0,0 +1,162 @@ +

Introduction

  • The paper presents a general message passing architecture, called Message Passing Neural Networks (MPNNs), that unifies various existing models for performing supervised learning on molecules.
  • Variants of the MPNN model achieve very good performance on the task of predicting the properties of molecules.
  • Link to the paper

MPNN

Setting

  • The input to the model is an undirected graph G, where node features are represented as x_v (for node v) and edge features as e_vw (for the edge between nodes v and w).
  • The idea is to learn a representation (or feature vector) for all the nodes (and possibly edges) in the graph and use that for the downstream supervised learning task.
  • The model can be easily extended to the setting of directed graphs.
  • The model works in 2 phases (a sketch of both follows the Readout description):

Message Passing Phase

  • All nodes send a message to their neighbouring nodes. The message is a function of the feature vectors corresponding to the sender node, the receiver node and the edge connecting the two. The feature vectors are combined into the message by a message function, which can be implemented as a neural network.
  • Once a node has received messages from all its neighbours, it updates its feature vector by aggregating all the messages. The function used to aggregate the messages and update the feature vector is called the update function and can also be implemented as a neural network.
  • After updating the feature vectors, the graph can initiate another round of message passing. After a sufficient number of message passing rounds, the Readout phase is invoked.

Readout Phase

  • The feature vectors corresponding to the different nodes in the graph are aggregated into a single feature vector (corresponding to the feature vector of the graph) using a readout function.
  • The readout function can also be implemented using a neural network, with the condition that it is invariant to permutations of the nodes within the graph (to ensure that the MPNN is invariant to graph isomorphism).
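
A minimal sketch of the two phases, with plain Linear/GRU message and update functions and a sum readout (the paper's best-performing variant uses an edge network and set2set instead):

    import torch
    import torch.nn as nn

    class MPNN(nn.Module):
        def __init__(self, d, rounds=3):
            super().__init__()
            self.rounds = rounds
            self.message = nn.Linear(3 * d, d)   # M(h_v, h_w, e_vw)
            self.update = nn.GRUCell(d, d)       # U(h_v, m_v)
            self.readout = nn.Linear(d, 1)       # R({h_v}); sum makes it permutation-invariant

        def forward(self, h, edge_index, e):
            # h: (num_nodes, d); edge_index: (2, num_edges) pairs w -> v; e: (num_edges, d)
            src, dst = edge_index
            for _ in range(self.rounds):
                msgs = self.message(torch.cat([h[dst], h[src], e], dim=1))
                m = torch.zeros_like(h).index_add(0, dst, msgs)  # aggregate per receiver
                h = self.update(m, h)
            return self.readout(h.sum(dim=0))    # graph-level prediction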

Existing Variants in literature


Experiments

Setup

  • Broadly speaking, the task is to predict the properties of given molecules (a regression problem).
  • The QM9 dataset consists of 130K molecules whose properties have been measured using quantum mechanical simulations (DFT).
  • Properties to be predicted include atomization energy, enthalpy, the highest fundamental vibrational frequency etc.
  • There are two benchmarks for the error:
    • DFT error - the estimated average error of the DFT approximation.
    • Chemical accuracy - as established by the chemistry community.

Model

  • The following variants of the message function are explored:
    • Matrix multiplication between A(e_vw) and h_v, where A is the adjacency matrix and h_v is the feature vector corresponding to node v.
    • Edge network, which is the same as the matrix multiplication case except that A is a learned matrix for each edge type.
    • Pair network, where the feature vectors corresponding to the source node, the target node and the edge are fed to a neural network.

Virtual Elements

  • Since all messages are shared via edges, it could take a long time for a message to travel between the two ends of the graph. To speed this up, virtual elements are introduced.
  • In the first setting, “virtual edges” are inserted between nodes.
  • In the second setting, a “master” node connects to all the other nodes.

Message Passing Complexity

  • In a graph with n nodes and d-dimensional feature vectors, a single step of message passing has a worst-case time complexity of O(n^2 d^2).
  • This complexity can be reduced by breaking the d-dimensional embedding into k groups of d/k-dimensional embeddings which can be updated in parallel. The complexity of the modified approach is O(n^2 d^2 / k).

Results

  • The best performing MPNN model uses the edge network as the message function and set2set as the readout function.
  • Using groups of embeddings helps to improve generalization. This effect could also be due to the ensemble-like nature of the modified architecture.
  • The model performs worse without the virtual elements.

Takeaways

  • Long-range interaction between vertices is necessary.
  • Scaling to larger molecule sizes is challenging because the model creates a fully connected graph by incorporating virtual elements.
diff --git a/_site/site/2018/05/06/Learning-to-Count-Objects-in-Natural-Images-for-Visual-Question-Answering.html b/_site/site/2018/05/06/Learning-to-Count-Objects-in-Natural-Images-for-Visual-Question-Answering.html new file mode 100644 index 00000000..6d01ced7 --- /dev/null +++ b/_site/site/2018/05/06/Learning-to-Count-Objects-in-Natural-Images-for-Visual-Question-Answering.html @@ -0,0 +1,74 @@ +

Introduction

  • Most visual question-answering (VQA) models perform poorly on the task of counting objects in an image. The main reasons are:
    • Most VQA models use a soft attention mechanism to perform a weighted sum over the spatial features and obtain a single feature vector. These aggregated features help for most categories of questions but seem to hurt counting-based questions.
    • For counting questions, we do not have a ground-truth segmentation of where the objects to be counted are in the image. This limits the scope of supervision.
  • Additionally, we need to ensure that any modification to the architecture that enhances performance on counting questions does not degrade performance on the other classes of questions.
  • The paper proposes to overcome these challenges by using the attention maps (and not the aggregated feature vectors) as input to a separate count module.
  • Link to the paper

Notes

The basic idea is quite intuitive: when we perform weighted averaging based on different attention maps, we end up averaging the features corresponding to the different instances of an object. This makes the feature vectors indistinguishable from the scenario where we had just one instance of the object in the image.

Even multiple glimpses (multiple attention steps) cannot resolve this problem, as the weights given to one feature vector do not depend on the other feature vectors (that are attended to). Hard attention could be more useful than soft attention, but there is not much empirical evidence in support of this hypothesis.

The proposed count module is a separate pipeline that can be integrated with most of the existing attention-based VQA models without affecting the performance on non-count questions.

The inputs to the count module are the attention maps and the object proposals (coming from some pre-trained model like the R-CNN model), and the output is a count feature vector which is used to answer the count-based question.

The top-level idea is the following - given the object proposals and the attention maps, create a graph where the nodes are objects (object proposals) and the edges capture how similar two object proposals are (how much they overlap). The graph is then transformed (by removing and scaling edges) so that the count of the objects can be obtained easily.

To explain their methodology, the paper simplifies the setting by making two assumptions:

• The first assumption is that the attention weights are either 1 (when the object is present in the proposal) or 0 (when the object is absent from the proposal).

• The second assumption is that any two object proposals either overlap completely (in which case they correspond to the exact same object and hence receive the exact same weights) or have zero overlap (in which case they must correspond to completely different objects).

These simplifying assumptions are made only for the sake of exposition and do not limit the capabilities of the count module.

Given the assumptions, the task of the count module is to handle the exact duplicates to prevent double-counting of objects.

As the first step, the attention weights (a) are used to generate an attention matrix (A) by taking the outer product between a and transpose(a). This corresponds to the step of creating a graph from the input.

A corresponds to the adjacency matrix of that graph. The attention weight for the i-th proposal corresponds to the i-th node in the graph, and the edge between nodes i and j has the weight a_i * a_j.

Also note that the graph is a weighted directed graph, and the subgraph of vertices satisfying the condition a_i = 1 is a complete directed graph with self-loops. For such a graph, the number of vertices V = sqrt(E), where E can be computed by summing over the adjacency matrix. This implies that if the proposals are distinct, then the count can be obtained trivially by performing a sum over the adjacency matrix.
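The V = sqrt(E) observation is easy to verify numerically in this idealized binary setting; a toy sketch (variable names are illustrative, not the paper's code):

```python
import torch

# Binary attention over 5 proposals, 3 of which are attended and distinct.
a = torch.tensor([1., 0., 1., 1., 0.])
A = torch.outer(a, a)        # adjacency matrix of the induced graph
E = A.sum()                  # sum over all edges, including self-loops
count = E.sqrt()             # V = sqrt(E) -> tensor(3.)
print(count)
```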

The objective is now to eliminate edges such that the underlying objects are the vertices of a complete subgraph. This requires removing two types of duplicate edges - intra-object edges and inter-object edges.

Intra-object edges can be removed by computing a distance matrix, D, defined as 1 - IoU, where the IoU matrix corresponds to the Intersection-over-Union matrix. A modified adjacency matrix A’ is obtained by taking the element-wise product of f1(A) and f2(D), where f1 and f2 are piece-wise linear functions that are learnt via backpropagation.

The inter-object edges are removed in the following manner:

• Count the number of proposals that correspond to each instance of an object and then scale down the edges corresponding to the different instances by that number.

• This creates the effect of reducing the weights of multiple proposals to be equivalent to a single proposal.

• The number of proposals corresponding to an object is not available as an annotation in the training pipeline, so it is estimated based on the similarity between the different proposals (measured via the attention weights a, the adjacency matrix A and the distance matrix D).

• The matrix corresponding to the similarity between proposals (sim_{i,j}) is transformed into a vector corresponding to the scaling factor of each node (s_i).

s can be converted into a matrix (by taking the outer product with itself) so as to scale both the incoming and the outgoing edges. The self-edges (which were removed while computing A’) are added back (after scaling with s) to obtain a new transformed matrix C.

The transformed matrix C is a complete graph with self-loops where the nodes correspond to all the relevant object instances and not to object proposals. The actual count can be obtained from C by performing a sum over all its values, as described earlier. The original count problem is a regression problem, but it is transformed into a classification problem to avoid scale issues. The network produces a k-hot n-dimensional vector o, where n is the number of object proposals that were fed into the module (and hence the upper limit on how large a number the module can count). Ideally k should be one, as the network would produce an integer value, but in practice the network produces a real number, so k can be up to 2. If c is an exact integer, the output is a one-hot vector with the value at the index corresponding to c set to 1. If c is a real number, the output is a linear interpolation between the two one-hot vectors corresponding to the two integers between which c lies, as sketched below.
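A tiny sketch of that interpolation step (the helper name is hypothetical, not from the paper):

```python
import torch

def soft_one_hot(c: float, n: int) -> torch.Tensor:
    # Interpolate between the one-hot vectors of floor(c) and ceil(c).
    o = torch.zeros(n)
    lo = int(c)
    frac = c - lo
    o[lo] = 1.0 - frac
    if lo + 1 < n:
        o[lo + 1] = frac
    return o

print(soft_one_hot(2.3, 6))   # tensor([0.0, 0.0, 0.7, 0.3, 0.0, 0.0])
```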

The count module supports computing the confidence of a prediction by defining two variables, p_a and p_D, which compute the average distance of f6(a) and f7(D) from 0.5. The final output o’ is defined as f8(p_a + p_D) · o.

All the different f functions are piece-wise linear functions and are learnt via backpropagation.


Experiments

The authors created a new category of count-based questions by filtering the number-type questions to remove questions like “What is the time right now?”. Such questions do have a numerical answer but do not fall under the purview of count-based questions and hence are not targeted by the count module.

The authors augmented a state-of-the-art VQA model with their count module and show substantial gains on the count-type questions of the VQA-v2 dataset. This augmentation does not drastically impact the performance on non-count questions.

The overall idea is quite crisp and intuitive, and the paper is easy to follow. It would be even better if there were some more ablation studies. For example, why are the piece-wise linear functions assumed to have 16 linear components? Would a smaller or larger number be better?

diff --git a/_site/site/2018/05/21/Net2Net-Accelerating-Learning-via-Knowledge-Transfer.html b/_site/site/2018/05/21/Net2Net-Accelerating-Learning-via-Knowledge-Transfer.html new file mode 100644 index 00000000..3c404a37 --- /dev/null +++ b/_site/site/2018/05/21/Net2Net-Accelerating-Learning-via-Knowledge-Transfer.html @@ -0,0 +1,69 @@ +

Notes

• The paper presents a simple yet effective approach for transferring knowledge from a trained neural network (referred to as the teacher network) to a larger, untrained neural network (referred to as the student network).

• The key idea is to use a function-preserving transformation which guarantees that, for any given input, the outputs of the teacher network and the newly created student network are the same.

• Link to the paper

• Link to an implementation

• The approach works as follows - let us say that the teacher network is represented by the transformation y = f(x, θ), where θ refers to the parameters of the network. The task is to choose a new set of parameters θ’ for the student network g(x, θ’) such that for all x, f(x, θ) = g(x, θ’).

• To start, we can assume that f and g are composed of standard linear layers. Layers i and i+1 are represented by the weight matrices W_i (of shape m×n) and W_{i+1} (of shape n×p).

• We want to grow layer i to have q output units (where q > n) and layer i+1 to have q input units. The new weight matrices would be U_i (of shape m×q) and U_{i+1} (of shape q×p).

• The first n columns (rows) of W_i (W_{i+1}) are copied as-is into U_i (U_{i+1}).

• To fill the remaining q−n slots, columns (rows) are sampled randomly from W_i (W_{i+1}).

• Finally, each replicated unit in U_{i+1} is scaled by dividing by its replication factor, to ensure that the output of the function remains unchanged by the operation (see the sketch after this list).

• Since convolutions can be seen as multiplication by a doubly block-circulant matrix, the approach can be readily extended to convolutional networks.

• The benefits of using this approach are the following:

  • The newly created student network performs at least as well as the teacher network.

  • Any changes to the network are guaranteed to be an improvement.

  • It is safe to optimize all the parameters in the network.

• The variant discussed above is called Net2WiderNet. There is another variant called Net2DeeperNet that enables the network to grow in depth.

• In that case, a new matrix U, initialized as the identity matrix, is added to the network. Note that unlike Net2WiderNet, this approach does not work with an arbitrary activation function between the layers.
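A minimal numerical sketch of the Net2WiderNet transformation for two linear layers (a sketch under the assumptions above; `net2wider` and the random mapping are illustrative, not the authors' code):

```python
import numpy as np

def net2wider(W1, W2, q):
    m, n = W1.shape                                  # layer i: m -> n units
    assert q > n
    mapping = np.concatenate([np.arange(n), np.random.randint(0, n, q - n)])
    counts = np.bincount(mapping, minlength=n)       # replication factors
    U1 = W1[:, mapping]                              # copy/duplicate columns
    U2 = W2[mapping, :] / counts[mapping][:, None]   # copy rows, rescale
    return U1, U2

W1, W2 = np.random.randn(8, 4), np.random.randn(4, 3)
U1, U2 = net2wider(W1, W2, q=6)
x = np.random.randn(8)
assert np.allclose(x @ W1 @ W2, x @ U1 @ U2)         # function preserved
```

The same argument goes through with a pointwise nonlinearity between the layers, since duplicated units receive identical pre-activations.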

Strengths

• The approach can accelerate the training of neural networks, especially during the development cycle when designers try out different models.

• The approach could potentially be used in life-long learning systems, where the model is trained over a stream of data and needs to grow over time.

Limitations

• The function-preserving transformations need to be worked out manually. Extra care needs to be taken when operations like concatenation or batch norm are present.
diff --git a/_site/site/2018/06/09/Born-Again-Neural-Networks.html b/_site/site/2018/06/09/Born-Again-Neural-Networks.html new file mode 100644 index 00000000..8cfc6968 --- /dev/null +++ b/_site/site/2018/06/09/Born-Again-Neural-Networks.html @@ -0,0 +1,123 @@ +

Introduction

• The paper explores knowledge distillation (KD) from the perspective of transferring knowledge between two networks of identical capacity.

• This is in contrast to much of the previous work in KD, which has focused on transferring knowledge from a larger network to a smaller network.

• The paper reports that these Born-Again Networks (BANs) outperform their teachers by significant margins in many cases.

• Link to the paper

Approach

• The standard KD setting is as follows:

  • Start with an untrained network (or an ensemble of networks) and train it for the given task. This network is referred to as the teacher network.

  • Now start with another untrained network (generally of smaller size than the teacher network) and train it using the output of the teacher network. This network is referred to as the student network.

• The paper augments this setting with an extra cross-entropy loss between the outputs of the teacher and the student networks. The student tries to predict the correct answer while matching the output distribution of the teacher (see the sketch below).

• The resulting student network is referred to as a BAN - Born-Again Network.

• The same approach can be used multiple times (with diminishing returns), where the k-th generation student is initialized by knowledge transfer from the (k−1)-th generation student.

• The outputs of multiple generations of BANs are combined via averaging to produce a BANE (Born-Again Network Ensemble).
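A sketch of the combined objective as described above (my reading of the setup, not the authors' code):

```python
import torch
import torch.nn.functional as F

def ban_loss(student_logits, teacher_logits, labels):
    # Supervised term: student matches the ground-truth labels.
    supervised = F.cross_entropy(student_logits, labels)
    # Distillation term: cross-entropy between teacher's distribution
    # and the student's predicted distribution.
    distill = -(F.softmax(teacher_logits, dim=1) *
                F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
    return supervised + distill

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
ban_loss(student_logits, teacher_logits, labels).backward()
```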

Dark Knowledge

• Hinton et al. suggested that even when the output of the teacher network is incorrect, it contains useful information about the similarity between the output classes. This information is referred to as “dark knowledge”.

• The paper observes that, during distillation, the gradient on the correct output dimension resembles the gradient from normal supervised training up to a weight factor. This sample-specific weight is defined by the value of the teacher’s max output.

• This suggests that distillation may be performing some kind of importance weighting. To explore this further, the paper considers two cases:

  • Confidence Weighted by Teacher Max (CWTM) - each example in the student’s loss function is weighted by the confidence the teacher has in its prediction for that sample. The student incurs a higher loss on examples where the teacher was more confident.

  • Dark Knowledge with Permuted Predictions (DKPP) - the non-argmax outputs of the teacher’s predictive distribution are permuted, thus destroying the information about which output classes are related.

• The key effect of these variations is that the covariance between the output classes is lost, so classical knowledge distillation would not be sufficient to explain improvements (if any).

Experiments


Image Data

• Datasets

  • CIFAR-10
  • CIFAR-100

• Baselines

  • ResNets
  • DenseNets

• BAN Variants

  • BAN-DenseNet and BAN-ResNet - train a sequence of 2 or 3 BANs using DenseNets and ResNets. Different variants constrain the BANs to be similar to their teacher or penalize the L2 distance between student and teacher activations, etc.

  • Two settings with CWTM and DKPP, as explained earlier.

  • BAN-ResNet with a DenseNet teacher and BAN-DenseNet with a ResNet teacher.

Text Data

• Datasets

  • PTB dataset

• Baselines

  • CNN-LSTM model

• BAN Variant

  • LSTM

Results

• BAN student models improved over their teachers in most of the configurations.

• Training BANs across multiple generations leads to saturating improvements.

• The student models exhibit improvements even in the control settings (CWTM and DKPP).

  • One reason could be that the permutation procedure did not remove the higher-order moments of the output distribution.

  • Improvements in the CWTM setting suggest that pre-trained models can be used to rebalance the training set by giving less weight to samples where the teacher’s output distribution is more spread out.
+ diff --git a/_site/site/2018/07/04/Memory-Based-Parameter-Adaption.html b/_site/site/2018/07/04/Memory-Based-Parameter-Adaption.html new file mode 100644 index 00000000..cd9ed18a --- /dev/null +++ b/_site/site/2018/07/04/Memory-Based-Parameter-Adaption.html @@ -0,0 +1,146 @@ +

Introduction

• Standard deep learning networks are not suitable for the continual learning setting, as changes in the data distribution lead to catastrophic forgetting.

• The paper proposes Memory-based Parameter Adaptation (MbPA), a technique that augments a standard neural network with an episodic memory (containing examples from the previous tasks).

• This episodic memory allows for rapid acquisition of new knowledge (corresponding to the current task) while preserving performance on the previous tasks.

• Link to the paper

Architecture

• MbPA consists of 3 components:

  • Embedding network f
  • Memory M
  • Output network g

• f and g are parametric components, while M is a non-parametric component.

• M is a dynamically sized dictionary where the key represents the output of the embedding network and the value represents the desired output for a given input (the input to the model).

• When a new training tuple (x_j, y_j) is fed as input to the model, a key-value pair (h_j, v_j) is added to the memory, where h_j = f(x_j).

• The memory has a fixed size and acts as a circular buffer: when it fills up, earlier examples are dropped.

• When accessing the memory using a key h_key, the k-nearest neighbours (in terms of distance from the given key) are retrieved.

Training Phase

• During the training phase, the memory is only used to store the input examples and does not interfere with the training procedure.

Testing Phase

• During testing, the memory is used to adapt the parameters of the output network g, while the embedding network f remains unchanged.

• Given the input x, obtain the embedding corresponding to x and, using that as the key, retrieve the k-nearest neighbours from the memory.

• Each retrieved neighbour is a tuple of the form (h_k, v_k, w_k), where w_k is proportional to the closeness between the input query and the key corresponding to the retrieved example.

• The collection of all the retrieved examples is referred to as the context C.

• The parameters of the output network g are adapted from θ to θ_x, where θ_x = θ + δ_M(x, θ).

• δ_M(x, θ) is referred to as the contextual update of the parameters of the output network (see the sketch after this list).
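A sketch of the test-time adaptation step, assuming a linear output network and a softmax-over-negative-distances weighting; names and hyperparameters are illustrative, not the paper's code:

```python
import torch
import torch.nn.functional as F

def mbpa_adapt(g, h_query, memory_keys, memory_values,
               k=5, steps=1, lr=0.1, reg=1.0):
    # Copy the output network's parameters theta so g itself is untouched.
    theta = [p.detach().clone().requires_grad_(True) for p in g.parameters()]
    dists = ((memory_keys - h_query) ** 2).sum(dim=1)
    knn = dists.topk(k, largest=False).indices       # retrieve context C
    w = F.softmax(-dists[knn], dim=0)                # closeness weights w_k
    for _ in range(steps):
        logits = F.linear(memory_keys[knn], theta[0], theta[1])
        nll = (w * F.cross_entropy(logits, memory_values[knn],
                                   reduction='none')).sum()
        # Regularize toward the original theta to avoid overfitting C.
        penalty = sum(((t - p.detach()) ** 2).sum()
                      for t, p in zip(theta, g.parameters()))
        grads = torch.autograd.grad(nll + reg * penalty, theta)
        theta = [(t - lr * gr).detach().requires_grad_(True)
                 for t, gr in zip(theta, grads)]
    return theta                                     # adapted parameters theta_x

g = torch.nn.Linear(32, 10)                          # output network
keys, values = torch.randn(100, 32), torch.randint(0, 10, (100,))
theta_x = mbpa_adapt(g, torch.randn(32), keys, values)
```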

Interpretation of MbPA

• MbPA can be interpreted as decreasing the weighted average of the negative log-likelihood over the retrieved neighbours in the context C.

• The expression for δ_M(x, θ) can be obtained by performing gradient descent to minimise the maximum a posteriori objective over the context C.

• The posterior expression can be written as a sum of two terms - one corresponding to a weighted likelihood of the data in the context C, and the other corresponding to a regularisation term that prevents overfitting to the context.

• This idea can be thought of as a generalisation of attention: attention can be viewed as fitting a constant function over the neighbourhood of memories, while MbPA fits a more general function which is parameterised by the output network of the given model. Refer to appendix E of the paper for further details.

Experiments

• MbPA aims to solve the fundamental problem of enabling the model to deal with changes in the data distribution.

• In that sense, it is evaluated on a wide range of settings: continual learning, incremental learning, unbalanced datasets, and changes in the data distribution at test time.

• Continual Learning:

  • In this setting, the model encounters a sequence of tasks and cannot revisit a previous task.

  • The Permuted MNIST dataset was used.

  • The key takeaway is that once a task is catastrophically forgotten, only a few gradient updates on carefully selected data are sufficient to recover the performance.

• Incremental Learning:

  • In this setting, the model is trained on a subset of classes and then introduced to novel, unseen classes. The model is tested to see if it can incorporate the new knowledge while retaining the knowledge about the previous classes.

  • The ImageNet dataset with a ResNet V1 model is used. The model is first pretrained on 500 classes and then fine-tuned to see how quickly it can adapt to new classes.

• Unbalanced Dataset:

  • This setting is similar to the incremental learning setting, with the key difference that the dataset used for fine-tuning is much smaller than the initial training dataset, thus creating the effect of unbalanced datasets.

• Language Modelling:

  • MbPA is used to adapt to the shift in word distribution that is common in language modelling tasks. The PTB and WikiText datasets were used.

• MbPA exhibits strong performance on all these tasks, showing that the memory-based parameter adaptation technique is effective across a range of supervised learning tasks.
diff --git a/_site/site/2018/07/11/Learning-Independent-Causal-Mechanisms.html b/_site/site/2018/07/11/Learning-Independent-Causal-Mechanisms.html new file mode 100644 index 00000000..dc2757e3 --- /dev/null +++ b/_site/site/2018/07/11/Learning-Independent-Causal-Mechanisms.html @@ -0,0 +1,124 @@ +

Introduction

• The paper presents a very interesting approach for learning independent (inverse) data transformations from a set of transformed data points in an unsupervised manner.

• Link to the paper

Formulation

• We start with a given data distribution P (say, the MNIST dataset), where each x ∈ R^d.

• Consider N transformations M_1, …, M_N (functions that map an input x to a transformed input x’). Note that N need not be known beforehand.

• These transformations can be thought of as causal mechanisms that are independent of one another.

• Applying these transformations gives N new distributions Q_1, …, Q_N.

• These individual distributions are combined to form a single transformed distribution Q, which contains the union of samples from the individual distributions.

• At training time, two datasets are created. One corresponds to untransformed objects (sampled from P) and is referred to as D_P. The other corresponds to samples from the transformed distribution Q and is referred to as D_Q.

• Note that all the samples in D_P and D_Q are sampled independently, and no supervisory information is needed.

• A set of N’ parametric models, called experts, are initialized and trained to learn the different mechanisms.

• For simplicity, assume that N = N’. If N > N’, some experts would learn more than one transformation, or certain transformations would not be learnt. If N < N’, some experts would not learn anything, or some experts would learn the same distribution. All of these cases can be diagnosed and corrected by changing the number of experts.

• The experts are trained with the goal of maximizing an objective c: R^d → R, which takes high values on the support of P and low values outside.

• During training, an example x_Q (from D_Q) is fed to all the experts at the same time. Each expert produces a value c_j = c(E_j(x_Q)).

• The winning expert is the one whose output is the highest among all the outputs. Its parameters are updated to maximise its output, while the other experts are not updated (see the sketch after this list).

• This forces the best performing model to become even better and hence specialize.

• The objective c comes from adversarial training, where a discriminator network discriminates between the untransformed input and the outputs of the experts.

• Each expert can be thought of as a GAN that conditions on the input x_Q (and not on a noise vector). The outputs of the different experts are fed to the discriminator, which provides both a selection mechanism and the gradients for training the experts.
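A minimal sketch of the winner-take-all expert update described above (experts, discriminator, and optimizers are illustrative stand-ins; the discriminator's own adversarial update is omitted):

```python
import torch

def competitive_step(x_q, experts, discriminator, expert_opts):
    scores = [discriminator(E(x_q)).mean() for E in experts]
    winner = int(torch.stack(scores).argmax())   # highest-scoring expert
    expert_opts[winner].zero_grad()
    (-scores[winner]).backward()                 # maximize c(E_winner(x_q))
    expert_opts[winner].step()                   # only the winner is updated
    return winner

experts = [torch.nn.Linear(4, 4) for _ in range(3)]
disc = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())
opts = [torch.optim.Adam(E.parameters(), lr=1e-3) for E in experts]
winner = competitive_step(torch.randn(8, 4), experts, disc, opts)
```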

Experiments

• Experiments are performed on the MNIST dataset, using transformations like translation along the 4 axes and the 4 diagonals, contrast shift, and inversion.

• The discriminator is further trained against the outputs of all the losing experts, thereby further strengthening the winning expert.

Approximate Identity Initialization

• The experts are initialized randomly and then pretrained to approximate the identity function by training with identical input-output pairs.

• This ensures that the experts start from a similar level.

• In practice, this seems necessary for the success of the proposed approach.

Observations

• During the initial phase, there is heavy competition between the experts, and eventually different winners emerge for different transformations.

• The approximate quality of the reconstructed output was also evaluated using a downstream task.

  • 3 types of inputs were created:

    • Untransformed images
    • Transformed images
    • Transformed images after being processed by the experts

  • These inputs are fed to a pretrained MNIST classifier.

  • The classifier performs poorly on the transformed images, while its performance on images processed by the experts quickly catches up with the performance on untransformed images.

• The experts E_i generalize to data points from a different dataset as well.

  • To test the generalisation capabilities of the experts, a sample of data from the Omniglot dataset is transformed and fed to the experts (which are trained only on MNIST).

  • Each expert consistently applies the same transformation even though the inputs are outside the training domain.

  • This suggests that the experts have generalized to different transformations irrespective of the underlying dataset.

Comments

• The experiments are quite limited in terms of the complexity of the dataset and of the transformations, but they provide evidence for a promising connection between deep learning and causality.

• The appendix mentions that when there are too many experts, for most of the tasks only one model specialises and the extra experts do not specialize at all. This is interesting, as there is no explicit regularisation penalty that prevents the emergence of multiple experts per task.
diff --git a/_site/site/2018/07/19/Kronecker-Recurrent-Units.html b/_site/site/2018/07/19/Kronecker-Recurrent-Units.html new file mode 100644 index 00000000..a88eb6ca --- /dev/null +++ b/_site/site/2018/07/19/Kronecker-Recurrent-Units.html @@ -0,0 +1,156 @@ +

Introduction

• Recurrent Neural Networks have two key issues:

  • Over-parameterization, which increases the time for training and inference.

  • An ill-conditioned recurrent weight matrix, which makes training difficult due to vanishing or exploding gradients.

• The paper presents a flexible RNN model called KRU (Kronecker Recurrent Units), which overcomes the above problems by using a Kronecker-factored recurrent matrix and soft unitary constraints on the factors.

• Link to the paper

Existing solutions for overparameterization

• Low-rank decomposition.

• Training a neural network on the soft targets predicted by a big pre-trained network.

• Low-bit precision training.

• Hashing.

Existing solutions for vanishing and exploding gradients

• Gating mechanisms, as in LSTMs.

• Gradient clipping.

• Orthogonal weight initialization.

• Parameterizing the recurrent weight matrix.

KRU

• KRU uses a Kronecker-factored recurrent matrix, which enables controlling the number of parameters and the number of factor matrices.

• Vanishing and exploding gradients are taken care of by using a soft unitary constraint.

• Why not use a strict unitary constraint?

  • It restricts the search space and makes the learning process unstable.

  • It makes forgetting (irrelevant) information difficult.

  • Relaxing the strict constraint has been shown to improve convergence speed and generalization performance.

• KRU can be easily plugged into RNNs, LSTMs and other variants.

• The recurrent matrix W is parameterized as a Kronecker product of F matrices W_0, …, W_{F-1}, where each W_f is a complex matrix of shape P_f × Q_f, and the product of all P_f and the product of all Q_f both equal N (see the sketch after this list).

• Why is W a complex matrix?

  • In the real space, the set of all unitary matrices have determinant 1 or -1.

  • Given that the determinant is a continuous function, the unitary set in the real space is disconnected.

  • The unitary set in the complex space is connected, as its determinants are points on the unit circle.

Soft Unitary Constraint

• A soft unitary constraint is introduced in the form of a regularization term ‖W_f^H W_f − I‖^2 (per Kronecker factor of the recurrent matrix).

• If each of the Kronecker factors is unitary, the resulting matrix W is also unitary.

• It is computationally inefficient to apply this constraint to the recurrent matrix W itself, as the complexity of the regularizer is O(N^3).

• The use of Kronecker factorisation makes it computationally feasible to apply this regulariser.
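A minimal sketch of the factored recurrent matrix and the per-factor soft unitary penalty (tiny illustrative sizes, not the paper's code):

```python
import torch

# W = W0 ⊗ W1 ⊗ W2, with complex 2x2 factors, so N = 8.
factors = [torch.randn(2, 2, dtype=torch.cfloat, requires_grad=True)
           for _ in range(3)]

W = factors[0]
for f in factors[1:]:
    W = torch.kron(W, f)              # full 8 x 8 recurrent matrix

# Soft unitary regularizer applied per factor: ||W_f^H W_f - I||^2.
# Each term costs O(P_f^3) instead of O(N^3) on the full matrix.
penalty = sum(((f.conj().T @ f
                - torch.eye(f.shape[0], dtype=torch.cfloat)).abs() ** 2).sum()
              for f in factors)
penalty.backward()
```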

Experiment

• The Kronecker recurrent model is compared against existing recurrent models on multiple tasks, including copy memory, the adding problem, pixel-by-pixel MNIST, character-level language modelling, polyphonic music modelling, and framewise phoneme classification.

• For most of the tasks, the KRU model produces results comparable to the best performing models despite using fewer parameters.

• Using soft unitary constraints in KRU provides a principled alternative to gradient clipping (a common heuristic to avoid exploding gradients).

• Further, recent theoretical results suggest that gradient descent converges to a global optimum of linear recurrent networks, even though the learning problem is non-convex, provided that the spectral norm of the recurrent matrix is bounded by 1.

• The key takeaway from the paper is that the state should be high-dimensional, so that a high-capacity network can be used for encoding and decoding the input and output, while the recurrent dynamics should be implemented via a low-capacity model.
diff --git a/_site/site/2018/08/08/Imagination-Augmented-Agents-for-Deep-Reinforcement-Learning.html b/_site/site/2018/08/08/Imagination-Augmented-Agents-for-Deep-Reinforcement-Learning.html new file mode 100644 index 00000000..2a9e77e7 --- /dev/null +++ b/_site/site/2018/08/08/Imagination-Augmented-Agents-for-Deep-Reinforcement-Learning.html @@ -0,0 +1,82 @@ +
• The paper presents I2A (Imagination-Augmented Agent), which combines the model-based and model-free approaches, leading to data efficiency and robustness even with imperfect models.

• The I2A agent uses the predictions from a learned environment model as additional context in deep policy networks.

• Link to the paper

• The I2A agent has two main modules - the imagination module and the policy module.
• Imagination Module

  • Environment Model

    • This is a recurrent model, trained in an unsupervised manner using the agent's trajectories. It can be used to predict the future state given the current state and action.

    • The environment model can be rolled out multiple times to obtain a simulated or “imagined” trajectory.

    • During each rollout, the actions are chosen using a rollout policy π_r.

  • Rollout Encoder

    • A rollout encoder E (an LSTM) is used to process each imagined rollout.

  • The imagination module is used to generate n trajectories. Each trajectory is a sequence of outputs of the environment model.

  • The n encoded trajectories are concatenated into a single “imagination” vector.

  • The training data for the environment model is generated from the trajectories of a partially trained model-free agent.

  • Pretraining the environment model (instead of training it jointly with the policy) leads to faster runtime.

• Policy Module

  • This module uses the outputs of both the model-based path and the model-free path as its input. It generates the policy vector and the value function.

• Rollout Strategy

  • One rollout is performed for each possible action in the environment, i.e., the first action in the i-th rollout is the i-th action in the action set.

  • Subsequent actions are generated using a shared rollout policy.

  • An effective strategy is to use a small model-free network π̂(o_t) as the rollout policy and add a KL loss component that encourages π̂(o_t) to stay similar to the imagination-augmented policy π(o_t).

• Baselines

  • Model-free agent

  • Copy-model agent - same as I2A, but the environment model is replaced by a “copy” model that just returns the input observations.

• Environments

  • Sokoban

    • The task is to push a number of boxes onto given target locations.

    • I2A outperforms the baselines, and its performance improves as the number of unrolling steps increases (though at a diminishing rate).

    • In the case of poor environment models, the agent seems to be able to ignore the later parts of the rollout where the error starts to accumulate.

    • A Monte Carlo search algorithm (without an explicit rollout encoder) performed poorly compared to the model using the rollout encoder.

    • Predicting the reward along with the value function and action seems to speed up training.

    • If a near-perfect model is available, the I2A agent's performance can be improved by performing Monte Carlo search with the trained I2A agent as the rollout policy. The agent plays entire episodes in simulation and tries to find a successful action sequence within 10 retries.

  • MiniPacman

    • The I2A agent is evaluated to see if a single model can be used to solve multiple tasks.

    • A new environment is designed to define multiple tasks in an environment with shared state transitions.

    • Each task is specified by a 5-dimensional reward vector that associates a reward with moving, eating food, eating a pill, eating a ghost, and being eaten by a ghost.

    • A single environment model is trained to predict both observations (frames) and events (e.g. “eating a pill”). This way, the environment model is shared across all tasks.

    • Baseline agents and I2As are trained on each task separately. The I2A architecture outperforms the standard agent on all tasks and the copy-model baseline on all but one task.

    • The improvement in performance is higher for tasks where rewards are sparse and where anticipating the ghost dynamics is especially important, indicating that the I2A agent can use the environment model to explore the environment more effectively.
diff --git a/_site/site/2018/08/16/Hierarchical-Graph-Representation-Learning-with-Differentiable-Pooling.html b/_site/site/2018/08/16/Hierarchical-Graph-Representation-Learning-with-Differentiable-Pooling.html new file mode 100644 index 00000000..db40fe91 --- /dev/null +++ b/_site/site/2018/08/16/Hierarchical-Graph-Representation-Learning-with-Differentiable-Pooling.html @@ -0,0 +1,133 @@ +

Introduction

• Most existing GNN (Graph Neural Network) methods are inherently flat and are unable to process information in a hierarchical manner.

• The paper proposes a differentiable graph pooling operation, DIFFPOOL, that can generate hierarchical graph representations and can be easily plugged into many GNN architectures.

• Link to the paper

Key Idea

• CNNs have a spatial pooling operation that allows deep CNN architectures to operate on increasingly coarse representations of the input images.

• This notion cannot be applied as-is to graphs, as they do not have a natural notion of spatial locality the way images do.

• DIFFPOOL attempts to resolve this problem by learning a differentiable soft assignment of nodes to clusters at each layer, which is equivalent to pooling clusters of nodes to obtain a coarsened representation.

Approach

• We are given a graph G(A, F), where A is the adjacency matrix and F is the feature matrix.

• Consider a permutation-invariant GNN that follows the message-passing architecture. The output of this GNN can be expressed as Z = GNN(A, X), where X is the current feature matrix.

• The goal is to stack L GNN layers on top of each other such that the l-th layer uses the coarsened output of the (l-1)-th layer.

• This coarsening operation uses a cluster assignment matrix S.

• The learned cluster assignment matrix at layer l is denoted as S_l.

• Given S_l, the embedding matrix for the (l+1)-th layer is given by transpose(S_l) Z_l, and the adjacency matrix is given by transpose(S_l) A_l S_l (see the sketch after this list).

• A separate GNN, called GNN_pool, is used to produce the assignment matrix S by taking a softmax over GNN_pool(A_l, X_l).

• As long as the underlying GNN model is permutation invariant, the resulting DIFFPOOL model is also permutation invariant.
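A minimal sketch of one DIFFPOOL coarsening step, with dense tensors and single linear message-passing layers standing in for GNN and GNN_pool (sizes and names are illustrative):

```python
import torch

n, d, c = 20, 16, 5                      # nodes, feature dim, clusters
A = (torch.rand(n, n) < 0.2).float()     # adjacency matrix (random here)
X = torch.randn(n, d)                    # node features
W_embed, W_pool = torch.randn(d, d), torch.randn(d, c)

Z = torch.relu(A @ X @ W_embed)          # Z = GNN(A, X)
S = torch.softmax(A @ X @ W_pool, dim=1) # S = softmax(GNN_pool(A, X))

X_next = S.T @ Z                         # coarsened features: c x d
A_next = S.T @ A @ S                     # coarsened adjacency: c x c
print(X_next.shape, A_next.shape)
```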

Auxiliary Losses

• The paper uses 2 auxiliary losses to push the model away from spurious local minima early in the training.

• Link prediction objective - at each layer, the link prediction loss ‖A − S transpose(S)‖ is minimized, with the intuition that nearby nodes should be pooled together.

• Ideally, the cluster assignment for each node should be close to a one-hot vector, so the entropy of the cluster assignment per node is regularized.

Baselines

• GNN-based models

  • GraphSage

    • Mean pooling
    • Set2Set pooling
    • Sort pooling

  • Structure2vec

  • Edge-conditioned filters in CNN

  • PatchySan

• Kernel-based models

  • Graphlet, shortest path, etc.

Model Variants

• GraphSage

  • Mean pool + DiffPool (2 or 3 layers)

• Structure2Vec + DiffPool

• DiffPool-Det

  • The assignment matrices S are generated using graph clustering algorithms.

• DiffPool-NoLP

  • The link prediction objective is turned off.

• At each DiffPool layer, the number of clusters is set to 25% of the number of nodes before the DiffPool layer.

Results

• DiffPool obtains the highest average performance across all the pooling approaches and improves upon the base GraphSage architecture by an average of around 7%.

• In terms of runtime, the paper reports that DiffPool does not incur any significant additional running time. But given that there are now 2 GNN models per layer, the size of the model should increase.

• DiffPool can capture hierarchical community structure even when trained on just the graph classification loss.

• One advantage of DiffPool is that the nodes are pooled in a non-uniform way, so a densely connected group of nodes would collapse into one cluster while sparsely connected nodes can retain their identities.
diff --git a/_site/site/2018/08/21/A-Semantic-Loss-Function-for-Deep-Learning-with-Symbolic-Knowledge.html b/_site/site/2018/08/21/A-Semantic-Loss-Function-for-Deep-Learning-with-Symbolic-Knowledge.html new file mode 100644 index 00000000..e2421dff --- /dev/null +++ b/_site/site/2018/08/21/A-Semantic-Loss-Function-for-Deep-Learning-with-Symbolic-Knowledge.html @@ -0,0 +1,155 @@ +

Introduction

• The paper proposes an approach for using symbolic knowledge in deep learning systems. Such knowledge is often expressed as boolean constraints on the output of the deep learning system, and directly incorporating these constraints breaks the differentiability of the system.

• Link to the paper

Problem Setting

• The model is given some input data on which to make predictions, and symbolic knowledge is provided in the form of boolean constraints, like the exactly-one constraint for a one-hot output encoding.

• Most approaches tend to encode the symbolic knowledge in the vector-space embedding to keep the model pipeline differentiable. In this process, the precise meaning of the symbolic knowledge is often lost.

• A differentiable “semantic loss” is derived which captures the meaning of the constraint while being independent of its syntax.

Terminology

• A state x (a state refers to an instantiation of the boolean variables) satisfies a sentence a if a evaluates to true when using the variables as specified by x.

• A sentence a entails another sentence b if all states that satisfy a also satisfy b.

• The raw output vector of the neural network is denoted as p, where each value in p denotes the probability of an output.

• Three different output constraints are studied:

  • Exactly-one constraint

    • Exactly one value in p should be true.

    • Can be expressed in boolean logic as follows: let (x_1, x_2, …, x_n) be the variables in p. Then (not x_i or not x_j) for every pair of variables, and (x_1 or x_2 or … or x_n).

  • Valid simple path constraint

    • A set of edges must form a valid path.

  • Ordering constraint

    • Defines an ordering over the variables.

Semantic Loss

• The semantic loss L_s(a, p) is a function of a propositional logic sentence a (the symbolic knowledge constraint) and p (the output of the neural network).

• a is defined over the variables (x_1, …, x_n), and p is interpreted as a vector of probabilities corresponding to these variables x_i.

• The semantic loss is proportional to the negative log-likelihood of generating a state that satisfies the constraint when sampling values according to the distribution p (see the sketch below).
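For the exactly-one constraint, the probability of sampling a satisfying state is sum_i p_i * prod_{j != i} (1 - p_j). A sketch of the resulting loss (my reading of the definition above, not the authors' code):

```python
import torch

def exactly_one_semantic_loss(p: torch.Tensor) -> torch.Tensor:
    n = p.shape[0]
    prob = torch.zeros(())
    for i in range(n):
        mask = torch.ones(n, dtype=torch.bool)
        mask[i] = False
        # Probability that exactly variable i is true.
        prob = prob + p[i] * (1 - p[mask]).prod()
    return -torch.log(prob)

p = torch.tensor([0.9, 0.05, 0.05], requires_grad=True)
loss = exactly_one_semantic_loss(p)   # small, since p is nearly one-hot
loss.backward()
```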

Main Axioms and Insights

• Monotonicity

  • If a sentence a entails another sentence b, then for any given p, L_s(a, p) ≥ L_s(b, p), i.e., adding more constraints cannot decrease the semantic loss.

• Semantic Equivalence

  • If two sentences are logically equivalent, their semantic loss is the same.

• Identity

  • For any given sentence a, its representation as a sentence is equivalent to its representation as a deterministic vector, i.e., writing the “one-hot” constraint as a boolean expression is equivalent to a one-hot vector.

• Satisfaction

  • If p entails the sentence a, then L_s(a, p) = 0.

• Label-literal correspondence

  • When the constraint is defined in terms of a single variable, it can be interpreted as the supervised label.

  • Hence the semantic loss in the case of a single variable should be equivalent to the cross-entropy loss.

• Truth

  • The semantic loss of a true sentence is 0.

• Non-negativity

  • The semantic loss is always non-negative.

• Probabilities of variables that are not part of the constraint do not affect the semantic loss.

• It can be shown that the semantic loss function satisfies all these axioms (and the others specified in the paper) and is the only function to do so, up to a multiplicative constant.

Experimental Evaluation

• The semantic loss is used in the semi-supervised setting for Permuted MNIST, Fashion MNIST and CIFAR-10.

• The key takeaway is that using the semantic loss improves the performance of state-of-the-art models on Fashion MNIST and CIFAR-10.

• One downside is that the effectiveness of the semantic loss for this type of constraint strongly depends on the performance of the underlying model. Further, the semantic loss does not improve the performance in the fully supervised scenario.

• Further experiments are performed to evaluate the performance of the semantic loss on complex constraints. Since these tasks aim to highlight the effect of using the semantic loss, only simple models (MLPs) are evaluated.

Tractability of Semantic Loss

• Computing the semantic loss is similar to the automated reasoning task called weighted model counting (WMC).

• Circuit compilation techniques can be used to compute WMC while allowing backpropagation.

Notes

• The proposed idea is simple and intuitive, and the results on the semi-supervised classification tasks are quite good. It would be interesting to extend and scale this method to more complex constraints.
diff --git a/_site/site/2018/09/12/Emergence-of-Grounded-Compositional-Language-in-Multi-Agent-Populations.html b/_site/site/2018/09/12/Emergence-of-Grounded-Compositional-Language-in-Multi-Agent-Populations.html new file mode 100644 index 00000000..96d69bb8 --- /dev/null +++ b/_site/site/2018/09/12/Emergence-of-Grounded-Compositional-Language-in-Multi-Agent-Populations.html @@ -0,0 +1,121 @@ +

Introduction

• The paper provides a multi-agent learning environment and proposes a learning approach that facilitates the emergence of a basic compositional language.

• The language is quite rudimentary and is essentially a sequence of abstract discrete symbols, but it does comprise a defined vocabulary and syntax.

• Link to the paper

Setup

• The setting is a cooperative, partially observable Markov game (a multi-agent extension of the MDP).

• All agents have identical action and observation spaces, use the same policy, and receive a shared reward.

Grounded Communication Environment

• A physically simulated 2-D environment in continuous space and discrete time, with N agents and M landmarks.

• The agents and the landmarks occupy locations and have attributes (colour, shape).

• Within the environment, the agents can go to a location, look at a location, or do nothing. Additionally, they can utter communication symbols c (from a shared vocabulary C). The agents themselves learn to assign meaning to these symbols.

• Each agent has an internal goal (which could require interaction with other agents to complete) which the other agents cannot see.

• The goal for agent i consists of an action to perform, a landmark location where the action should be performed, and another agent who should perform the action.

• Since the agents are continuously emitting symbols, a memory module is provided and simple additive memory updates are performed.

• For interaction, the agents can use verbal utterances, non-verbal signals (gaze) or non-communicative strategies (pushing other agents).

Approach

• A model of all agent and environment state dynamics is built over time, and the gradient of the return is computed.

• The Gumbel-Softmax distribution is used to obtain categorical word emissions c (see the sketch after this list).

• A multi-layer perceptron is used to model the policy, which returns the action, the communication symbol, and the memory update for each agent.

• Since the number of agents (and hence the number of communication streams, etc.) can vary across instantiations, an identical model is instantiated per agent and per communication stream.

• The outputs of the individual processing modules are pooled into feature vectors corresponding to the communication and physical observations. These pooled features and the goal vectors are fed to the final processing module, from which actions and categorical symbols are sampled.

• In practice, using an additional task (each agent predicts the goal of another agent) encouraged more meaningful communication utterances.
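A sketch of straight-through Gumbel-Softmax sampling for discrete word emission (the standard trick; vocabulary size and temperature are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_word(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)   # differentiable
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # One-hot symbols in the forward pass, soft gradients in the backward pass.
    return y_hard + (y_soft - y_soft.detach())

logits = torch.randn(4, 20, requires_grad=True)           # 20-symbol vocabulary
c = gumbel_softmax_word(logits)                           # sampled symbols
```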

Compositionality and Vocabulary Size

• The authors recommend using a large vocabulary with a soft penalty that discourages the use of too many words. This leads to a large vocabulary being used in the intermediate stages, which then converges to a small vocabulary.

• Along the lines of rich-get-richer dynamics, the communication symbols c are modelled as being generated by a Dirichlet process. The resulting reward across all agents is the log-likelihood of all communication utterances having been generated by a Dirichlet process.

• Since the agents can only communicate in discrete symbols and do not have a global positioning reference, they need to unambiguously communicate landmark references to other agents.

Case I - Agents cannot see each other

• Non-verbal communication is not possible.

• When trained with just 2 agents, symbols are assigned to each landmark and action.

• As the number of agents increases, additional symbols are used to refer to agents.

• If agents of the same colour are asked to perform conflicting tasks, they perform the average of the conflicting tasks. If distractor locations are added, the agents learn to ignore them.

Non-verbal communication

• Agents are allowed to observe other agents’ positions, gaze, etc.

• Now a location can be pointed to using gaze.

• If gaze is disabled, an agent can indicate the goal landmark by moving to it.

• Basically, even when verbal communication is disabled, the agents can come up with strategies to complete the task.
diff --git a/_site/site/2018/09/27/HoME-a-Household-Multimodal-Environment.html b/_site/site/2018/09/27/HoME-a-Household-Multimodal-Environment.html new file mode 100644 index 00000000..a5964df2 --- /dev/null +++ b/_site/site/2018/09/27/HoME-a-Household-Multimodal-Environment.html @@ -0,0 +1,103 @@ +

Introduction

• An environment for learning using modalities like vision, audio, semantics, physics, and interaction with objects and other agents.

• Link to the paper

Motivation

• Humans learn by interacting with their surroundings (the environment).

• Similarly, training in an interactive, multimodal environment (virtual embodiment) could be useful for a learning agent.

Characteristics

• Open-source and OpenAI Gym compatible.

• Built on top of 45,000 3D house layouts from the SUNCG dataset.

• Provides both 3D visual and audio rendering.

• Provides semantic image segmentation and language descriptions of objects.

Components

• Rendering Engine

  • Implemented using the Panda3D game engine.

  • Renders RGB+depth scenes based on textures, multi-source lighting and shadows.

• Acoustic Engine

  • Implemented using EVERT.

  • Supports multiple microphones, sound sources, sound absorption based on materials, atmospheric conditions, etc.

• Semantics Engine

  • Provides a short textual description for each object, along with information like color, category, material, size, location, etc.

• Physics Engine

  • Implemented using the Bullet3 engine.

  • Supports physical interaction, external forces like gravity, and position and velocity information for multiple agents.

Potential Applications

• Visual Question Answering

• Conversational Agents

• Training an agent to follow instructions

• Multi-agent communication
diff --git a/_site/site/2018/10/04/When-Recurrent-Models-Don-t-Need-To-Be-Recurrent.html b/_site/site/2018/10/04/When-Recurrent-Models-Don-t-Need-To-Be-Recurrent.html new file mode 100644 index 00000000..4f9bbcbf --- /dev/null +++ b/_site/site/2018/10/04/When-Recurrent-Models-Don-t-Need-To-Be-Recurrent.html @@ -0,0 +1,62 @@ +

Introduction

• The paper explores “if a well behaved RNN can be replaced by a feed-forward network of comparable size without loss in performance.”

• “Well behaved” is defined in terms of the control-theoretic notion of stability. This roughly requires that the gradients do not explode over time.

• The paper shows that, under the stability assumption, feedforward networks can approximate RNNs for both training and inference. The results are empirically validated as well.

• Link to the paper

Problem Setting

• Consider a general, non-linear dynamical system given by a differentiable state-transition map Φw. The hidden state is given by ht = Φw(ht-1, xt).
• Assumptions:
  • Φ is smooth in w and h.
  • h0 = 0.
  • Φw(0, 0) = 0 (can be ensured by translation).
• Stable models are the ones where Φ is contractive, i.e., norm(Φw(h, x) - Φw(h’, x)) is less than Λ * norm(h - h’) for some Λ < 1.
• For example, in an RNN, stability would require that norm(w) is less than (Lp)-1, where Lp is the Lipschitz constant of the point-wise non-linearity used.
• The feedforward approximation uses a finite context (of length k), i.e., it is a truncated model (see the numerical sketch below).
• A non-parametric function f maps the output of the recurrent model to the prediction. If f is desired to be a parametric model, its parameters can be pushed into the recurrent model.
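
The effect of truncation on a stable system is easy to check numerically. Below is a minimal sketch (my own, not from the paper) with a scalar contractive map: since tanh is 1-Lipschitz, any |w| < 1 gives a Λ-contraction, and the gap between the full recurrent state and its k-step truncation decays geometrically in k.

    import numpy as np

    def phi(w, h, x):
        # Contractive transition map: tanh is 1-Lipschitz, so |w| < 1
        # makes the system Lambda-contractive with Lambda = |w|.
        return np.tanh(w * h + x)

    def recurrent(w, xs):
        # Full recurrent model: h_t = phi(w, h_{t-1}, x_t), with h_0 = 0.
        h = 0.0
        for x in xs:
            h = phi(w, h, x)
        return h

    def truncated(w, xs, k):
        # Feedforward approximation: only the last k inputs are used.
        return recurrent(w, xs[-k:])

    w, xs = 0.5, np.random.randn(1000)
    for k in (1, 5, 10, 20):
        print(k, abs(recurrent(w, xs) - truncated(w, xs, k)))  # error ~ Lambda^k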

Theoretical Results

• For a Λ-contractive system, it can be proved that for a large enough k (and additional Lipschitz assumptions), the difference in prediction between the recurrent and the truncated model is negligible.
• If the recurrent model and the truncated feedforward network are initialized at the same point and trained on the same input for N steps, then for an appropriate k, the weights of the two models remain very close in Euclidean space. It can be shown that this small difference does not lead to large gradient differences during subsequent update steps.
• This can be roughly interpreted as: if gradient descent can train a stable recurrent network, it can also train a feedforward model, and vice-versa.
• The stability condition is important: without it, truncated models can be bad (even for large values of k). Further, without stability it is difficult to show that gradient descent converges to a stationary point.
diff --git a/_site/site/2018/10/11/Poincare-Embeddings-for-Learning-Hierarchical-Representations.html b/_site/site/2018/10/11/Poincare-Embeddings-for-Learning-Hierarchical-Representations.html new file mode 100644 index 00000000..7ac9211d --- /dev/null +++ b/_site/site/2018/10/11/Poincare-Embeddings-for-Learning-Hierarchical-Representations.html @@ -0,0 +1,130 @@ +

Introduction

• Much of the work in representation learning uses Euclidean vector spaces to embed datapoints (like words, nodes, entities etc.).
• This approach is not effective when the data has a (latent) hierarchical structure.
• The paper proposes to compute the embeddings in hyperbolic space so as to preserve both the similarity and the structure information.
• Link to the paper

Hyperbolic Geometry

• Hyperbolic spaces are spaces with constant negative curvature, while Euclidean spaces have zero curvature.
• The disc area and circle length in hyperbolic space increase exponentially with the radius r, while in Euclidean space they increase quadratically and linearly respectively.
• This makes hyperbolic space more suitable for embedding tree-like structures, where the number of nodes grows exponentially as we move away from the root.
• Hyperbolic spaces can be thought of as the continuous version of trees, and trees can be thought of as the discrete version of hyperbolic spaces.

Poincare Embeddings

• The Poincare model is one of several possible models of hyperbolic space and is considered here as it is more amenable to gradient-based optimisation.
• The distance between 2 points changes smoothly and is symmetric. Thus the hierarchical organisation only depends on the distance from the origin, which makes the model applicable in settings where the hierarchical structure needs to be inferred from the data.
• Eventually, the norm of a point represents its position in the hierarchy while the distance between points represents similarity.

Optimization

• RSGD (Riemannian SGD) is used (see the sketch below).
• Riemannian gradients can be computed from the Euclidean gradients by rescaling with the inverse of the Poincare ball metric tensor.
• The embeddings are constrained to stay within the Poincare ball by a projection operation that renormalizes any embedding whose magnitude reaches 1 back to just inside the unit ball.
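
A small numpy sketch of the three ingredients above, following the formulas in the paper (the Poincare distance, the metric rescaling used by RSGD, and the projection); the surrounding training loop is omitted.

    import numpy as np

    def poincare_distance(u, v):
        # d(u, v) = arcosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
        sq = np.sum((u - v) ** 2)
        denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
        return np.arccosh(1 + 2 * sq / denom)

    def riemannian_grad(euclidean_grad, theta):
        # Rescale by the inverse Poincare ball metric:
        # g_R = ((1 - |theta|^2) / 2)^2 * g_E
        return ((1 - np.sum(theta ** 2)) / 2) ** 2 * euclidean_grad

    def project(theta, eps=1e-5):
        # Keep the embedding strictly inside the unit ball.
        norm = np.linalg.norm(theta)
        return theta * (1 - eps) / norm if norm >= 1 else theta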

Training Details

• Initializing the embeddings close to 0 (by sampling uniformly from (-0.001, 0.001)) helps.
• The model is trained for an initial burn-in period of 10 epochs with 0.1 times the learning rate so as to find a better initial angular layout.

Evaluation

• Embedding taxonomy for the WordNet task
  • Setup
    • Reconstruction
    • Link Prediction
  • The input data is a collection of pairs of words (u, v) which are related to each other.
  • For each word pair, 10 negative samples of the form (u, v’) are sampled, and the training procedure uses a soft ranking loss that aims to bring the related objects closer together.
• Network Embedding
  • Baselines
    • Euclidean Embeddings
    • Translational Embeddings, where a relation vector corresponding to the edge type is also learnt.
  • Datasets
    • ASTROPH
    • CONDMAT
    • GRQC
    • HEPPH
• Lexical Entailment
  • Hyperlex - a gold standard to evaluate how well the semantic models capture lexical entailment on a scale of [0, 10].
• The key takeaway is that for all the datasets/setups, hyperbolic embeddings give a performance benefit when the embedding dimension is small.

Challenges

• Hyperbolic embeddings are not suitable for all datasets, e.g. if the dataset is not tree-like or has cycles.
• Hyperbolic embeddings are difficult to optimize as each operation needs to be modified to be usable in hyperbolic space.
diff --git a/_site/site/2018/10/18/BabyAI-First-Steps-Towards-Grounded-Language-Learning-With-a-Human-In-the-Loop.html b/_site/site/2018/10/18/BabyAI-First-Steps-Towards-Grounded-Language-Learning-With-a-Human-In-the-Loop.html new file mode 100644 index 00000000..b9642c52 --- /dev/null +++ b/_site/site/2018/10/18/BabyAI-First-Steps-Towards-Grounded-Language-Learning-With-a-Human-In-the-Loop.html @@ -0,0 +1,154 @@ +

Introduction

• BabyAI is a research platform to investigate and support the feasibility of including humans in the loop for grounded language learning.
• The setup is a series of levels (of increasing difficulty) to train the agent to acquire a synthetic language (Baby Language), which is a proper subset of the English language.
• Link to the paper

Motivation

• The BabyAI platform provides support for curriculum learning and interactive learning as part of its human-in-the-loop training setup.
• Curriculum learning is incorporated by having a curriculum of levels of increasing difficulty.
• Interactive learning is supported by including a heuristic expert which can provide new demonstrations on the fly to the learning agent.
• The heuristic expert can be thought of as the human-in-the-loop which can guide the agent through the learning process.
• One downside of human-in-the-loop training is the poor sample complexity of the learning agent. The heuristic expert can be used to estimate the sample efficiency.

Contribution

• The BabyAI research platform for grounded language learning with a simulated human-in-the-loop.
• Baseline results for performance and sample efficiency for the different tasks.

BabyAI Platform


Environment

• MiniGrid - a partially observable 2D grid-world environment.
• Entities - agent, ball, box, door, keys.
• Actions - pick, drop or move objects, unlock doors etc.

Baby Language

• A synthetic language (a proper subset of English) used to give instructions to the agent.
• Support for verifying whether the task (and the subtasks) have been completed or not.

Levels

• A level is an instruction-following task.
• Formally, a level is a distribution over missions - a combination of an initial state of the environment and an instruction (in Baby Language).
• Motivated by curriculum learning, the authors create a series of levels (of increasing difficulty).
• A subset of skills (competencies) is required for solving each task. The platform takes this constraint into account when creating a level.

Heuristic Expert

• The platform supports a heuristic expert that simulates the role of a human teacher and knows how to solve each task.
• For any level, it can suggest actions or generate demonstrations (given the state of the environment).

Experiment

• An imitation learning baseline is trained for each level.
• The data requirement for each level and the benefits of curriculum learning and imitation learning are investigated (in terms of sample efficiency).

Model Architecture

• A GRU to encode the instruction and a CNN to encode the input observation.
• A FiLM layer to combine the two representations (see the sketch after this list).
• An LSTM to encode the per-timestep FiLM encodings (over timesteps in the environment).
• Two model variants are considered:
  • Large model - bidirectional GRU + attention + large hidden state.
  • Small model - unidirectional GRU + no attention + small hidden state.
• The heuristic expert is used to generate trajectories, and the models are trained by imitation learning (to be used as baselines).
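
For reference, a minimal PyTorch sketch of a FiLM conditioning layer; the class name, shapes and layer choices here are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class FiLM(nn.Module):
        # Feature-wise Linear Modulation: the instruction embedding produces
        # per-channel scales (gamma) and shifts (beta) for the image features.
        def __init__(self, instr_dim, num_channels):
            super().__init__()
            self.gamma = nn.Linear(instr_dim, num_channels)
            self.beta = nn.Linear(instr_dim, num_channels)

        def forward(self, image_features, instr_embedding):
            # image_features: (batch, channels, height, width)
            # instr_embedding: (batch, instr_dim)
            g = self.gamma(instr_embedding)[:, :, None, None]
            b = self.beta(instr_embedding)[:, :, None, None]
            return g * image_features + b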

Results

• The key takeaway is that current deep learning approaches are extremely sample-inefficient when learning a compositional language.
• The data efficiency of RL methods is much worse than that of imitation learning methods, showing that the current imitation learning and reinforcement learning methods scale and generalize poorly.
• Curriculum-based pretraining and interactive learning were found to be useful in only some cases.
+ diff --git a/_site/site/2018/10/25/One-shot-Learning-with-Memory-Augmented-Neural-Networks.html b/_site/site/2018/10/25/One-shot-Learning-with-Memory-Augmented-Neural-Networks.html new file mode 100644 index 00000000..2659a3d5 --- /dev/null +++ b/_site/site/2018/10/25/One-shot-Learning-with-Memory-Augmented-Neural-Networks.html @@ -0,0 +1,121 @@ +

Introduction

• The paper demonstrates that Memory Augmented Neural Networks (MANNs) are suitable for one-shot learning, by introducing a new method for accessing an external memory.
• This method focuses on memory content, while earlier methods additionally used memory-location-based focusing mechanisms.
• Here, MANN refers to neural networks that have an external memory. This includes Neural Turing Machines (NTMs) and excludes LSTMs.
• Link to the paper

Meta-Learning

• In meta-learning, a learner learns at two levels.
• The learner is shown a sequence of tasks D1, D2, …, DT.
• When it is training on one of the datasets (say Dt), it learns to solve the current dataset.
• At the same time, the learner tries to incorporate knowledge about how the task structure changes across different datasets (the second level of learning).

MANN + Meta Learning

• The following are desirable characteristics for a scalable, combined architecture:
  • The memory representation should be both stable and element-wise accessible.
  • The number of model parameters should not be tied to the size of the memory.

Task Setup

• In standard learning, the goal is to reduce the error on some dataset D. In meta-learning, the goal is to reduce the error across a distribution of datasets p(D).
• Each dataset is presented to the model in the form (x1, null), (x2, y1), …, (xt+1, yt), where yt is the correct label (or value) corresponding to the input xt, i.e., each label is presented one timestep after its input.
• Further, the data labels are shuffled from dataset to dataset.
• The model must learn to hold the data samples in memory until the appropriate candidate labels are presented in the next step.
• The idea is that a model that meta-learns would learn to map data representations to correct labels regardless of the actual context of the data representation or the label.
• The paper uses an NTM as the MANN, with one modification.
• In the original formulation, the memories were addressed by both content and location. Location-based addressing is not optimal for the current setup, where information encoding is not independent of the sequence.
• A new access module - LRUA (Least Recently Used Access) - is used to write to memory.
• LRUA is purely content-based and writes either to the least recently used memory location (to preserve recent information) or to the most recently used memory location (to overwrite recent information with more relevant information). This is decided on the basis of interpolation between the previous read weights and weights scaled according to the usage weights (see the sketch below).
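
A minimal numpy sketch of the LRUA write-weight rule described above, following the interpolation formula in the paper; the names alpha (the learnable gate) and n_reads (the number of read heads) are my own.

    import numpy as np

    def lrua_write_weights(prev_read_w, prev_usage_w, alpha, n_reads=1):
        # w_write = sigma(alpha) * w_read_prev + (1 - sigma(alpha)) * w_lu_prev,
        # where w_lu is an indicator over the n least-used memory slots.
        gate = 1.0 / (1.0 + np.exp(-alpha))
        least_used = np.zeros_like(prev_usage_w)
        least_used[np.argsort(prev_usage_w)[:n_reads]] = 1.0
        return gate * prev_read_w + (1.0 - gate) * least_used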

Datasets

• Omniglot (classification).
• Functions sampled from Gaussian processes (regression).

Results

• For the Omniglot dataset, the model was trained with various combinations of randomly chosen classes with randomly chosen labels.
• The following models were considered as baselines:
  • Regular NTM
  • LSTM
  • Feedforward RNN
  • Nearest-neighbour classifier
• Since each episode (a dataset created by a combination of classes) contains unique classes (with their own unique labels), it is important to clear the memory across episodes.
• For the regression task, the data was generated from a GP prior with a fixed set of hyper-parameters, which resulted in different functions.
• For both tasks, the MANN architecture outperforms both the LSTM baseline and the regular NTM.
+ diff --git a/_site/site/2018/11/01/Learned-Optimizers-that-Scale-and-Generalize.html b/_site/site/2018/11/01/Learned-Optimizers-that-Scale-and-Generalize.html new file mode 100644 index 00000000..4dd86fa6 --- /dev/null +++ b/_site/site/2018/11/01/Learned-Optimizers-that-Scale-and-Generalize.html @@ -0,0 +1,70 @@ +

Introduction

• The paper introduces a learned gradient descent optimizer that has low memory and computational overhead and that generalizes well to new tasks.
• Link to the paper

Key Advantage

• Uses a hierarchical RNN architecture augmented by features like adaptive input and output scaling, momentum etc.
• A meta-learning set of small, diverse optimization tasks with diverse loss landscapes is developed. The learnt optimizer generalizes to much more complex tasks and setups.

Architecture

• A hierarchical RNN is designed to act as a learned optimizer. This RNN is the meta-learner, and its parameters are shared across different tasks.
• The learned optimizer takes as input the gradient (and related metadata) for each parameter and outputs the update to the parameters.
• At the lowest level of the hierarchy, a small “Parameter RNN” ingests the gradient (and related metadata) of a single parameter.
• One level up, an intermediate “Tensor RNN” incorporates information from a subset of Parameter RNNs (e.g. one Tensor RNN per layer of a feedforward network).
• At the highest level is the Global RNN, which receives input from all the Tensor RNNs and can keep track of weight updates across the task.
• The input of each RNN is averaged and fed as input to the RNN one level up, and the output of each RNN is fed as a bias to the RNNs one level down.
• In practice, the hidden-state sizes are fixed at 10, 30 and 20 respectively.

Features inspired from existing optimizers

• Attention and Nesterov’s momentum
  • An attention mechanism is incorporated by attending to new regions of the loss surface (which are an offset from the previous parameter location).
  • To incorporate momentum on multiple timescales, the exponential moving averages of the gradient at several timescales are also provided as input.
  • The averaged gradients are rescaled (as in RMSProp and Adam).
  • Relative log gradient magnitudes are also provided as input so that the optimizer can assess how the gradient magnitude changes with time.
diff --git a/_site/site/2018/12/11/Representation-Tradeoffs-for-Hyperbolic-Embeddings.html b/_site/site/2018/12/11/Representation-Tradeoffs-for-Hyperbolic-Embeddings.html new file mode 100644 index 00000000..93291e9d --- /dev/null +++ b/_site/site/2018/12/11/Representation-Tradeoffs-for-Hyperbolic-Embeddings.html @@ -0,0 +1,181 @@ +

Introduction

• The paper describes a combinatorial approach to embed trees into hyperbolic spaces without performing optimization.
• The resulting mechanism is analyzed to obtain dimensionality-precision tradeoffs.
• To embed general metric spaces in hyperbolic space, a hyperbolic generalization of multidimensional scaling (h-MDS) is proposed.
• Link to the paper

Preliminaries

• Hyperbolic spaces
  • Have the “tree-like” property, i.e., the shortest path between a pair of points is almost the same as the path through the origin.
  • Generally, the Poincare ball model is used, given its advantages like conformality to Euclidean space.
• Fidelity measures
  • Mean Average Precision (MAP)
    • A local metric based on the ranking of distances to the immediate neighbors.
  • Distortion
    • A global metric that depends on the underlying distances and not just the local relationship between distances.

Combinatorial Construction for embedding hierarchies into Hyperbolic spaces

• Embed the given graph G = (V, E) into a tree T.
• Embed the tree T into the Poincare ball Hd of dimensionality d.

Sarkar’s construction to embed points in a 2-d Poincare ball

• Consider two points a and b (from the tree) where b is the parent of a.
• Assume that a is embedded as f(a), b is embedded as f(b), and the children of a need to be embedded.
• Reflect f(a) and f(b) across a geodesic such that f(a) is mapped to 0 (the origin) while f(b) is mapped to some new point z.
• The children of a are placed at points yi that are equally spaced around a circle of radius (er - 1) / (er + 1) and maximally separated from z, where r is the scaling factor.
• Then all the points are reflected back across the geodesic so that all children are at distance r from f(a).
• To embed the tree itself, place the root node at the origin, place its children around it in a circle, then place their children, and so on (the root-level step is sketched below).
• In this construction, precision scales logarithmically with the degree of the tree but linearly with the maximum path length.
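
A sketch of just the root-level placement step in the 2-d Poincare disk (placing deeper nodes additionally requires the reflection isometries described above):

    import numpy as np

    def place_children(num_children, r, offset=0.0):
        # Children of a node at the origin are placed at hyperbolic distance r,
        # i.e. at Euclidean radius (e^r - 1)/(e^r + 1), equally spaced in angle.
        radius = (np.exp(r) - 1) / (np.exp(r) + 1)
        angles = offset + 2 * np.pi * np.arange(num_children) / num_children
        return radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)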

d-dimensional hyperbolic spaces

• In the d-dimensional space, the points are embedded into hyperspheres (instead of circles).
• The number of children nodes that can be placed for a particular angle grows with the dimension.
• Increasing the dimension helps with bushy trees (those with high node degree).

Hyperbolic multidimensional scaling (h-MDS)

• Given the pairwise distances for a set of points in hyperbolic space, how can the points be recovered?
• The corresponding problem in Euclidean space is solved using MDS.
• A variant of MDS, called h-MDS, is proposed.
• MDS makes a centering assumption, i.e., that the points have zero mean. In h-MDS, a new mean (called the pseudo-Euclidean mean) is introduced to enable recovery via matrix factorization.
• Instead of the Poincare model, the hyperboloid model is used (though points can be mapped back and forth between the two).

pseudo-Euclidean Mean

• A set of points can always be centered, without affecting their pairwise distances, by finding their mean and sending it to 0 via an isometry.

Recovery via matrix factorization

• Given the pairwise distances d(xi, xj), a new matrix Y is constructed with entries cosh(d(xi, xj)).
• Running PCA on -Y recovers X up to rotation (see the sketch below).
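
A rough numpy sketch of this recovery step, assuming the points are already centered at the pseudo-Euclidean mean (the paper's full algorithm also handles the centering itself):

    import numpy as np

    def h_mds(D, rank):
        # D[i, j]: pairwise hyperbolic distances of centered points.
        # In the hyperboloid model, cosh(d(x_i, x_j)) relates to the Minkowski
        # inner product, so the top eigenpairs of -Y recover the coordinates
        # up to rotation.
        Y = np.cosh(D)
        vals, vecs = np.linalg.eigh(-Y)
        idx = np.argsort(vals)[::-1][:rank]
        return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))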

Dimensionality Reduction with PGA (Principal Geodesic Analysis)

• PGA is the counterpart of PCA in hyperbolic spaces.
• First, the Karcher mean of the given points is computed.
• All points xi are reflected so that their mean is 0 in the Poincare disk model.
• Combining this with the Euclidean reflection formula and the hyperbolic metric leads to a non-convex loss function, which can be optimized using a gradient descent algorithm.

Experiments

• Datasets
  • Trees: fully balanced trees and phylogenetic trees expressing genetic heritage.
  • Tree-like hierarchies: WordNet hypernyms and a graph of Ph.D. advisor-advisee relationships.
  • Non-tree-like graphs: disease relationships, protein interactions etc.
• Results
  • The combinatorial construction outperforms approaches based on optimization in terms of both MAP and distortion.
  • E.g. on WordNet, the combinatorial approach achieves a MAP of 0.989 with just 2 dimensions, while the previous best was 0.87 with 200 dimensions.
+ diff --git a/_site/site/2018/12/18/Hindsight-Experience-Replay.html b/_site/site/2018/12/18/Hindsight-Experience-Replay.html new file mode 100644 index 00000000..027f438c --- /dev/null +++ b/_site/site/2018/12/18/Hindsight-Experience-Replay.html @@ -0,0 +1,76 @@ +

Introduction

• Hindsight Experience Replay (HER) is a sample-efficient technique to learn from sparse rewards.
• Link to the paper

Idea

• Assume a footballer misses the goal narrowly. Even though the player does not get any “reward” (in terms of a goal), the player realizes that had the goalpost been shifted a bit, the same shot would have resulted in a goal (reward).
• The same intuition is applied to the RL agent - say the true goal state was g while the agent ends up in the state s.
• While the action sequence is not useful for reaching the goal state g, it is indeed useful for reaching the state s. Hence the trajectory can be replayed with the goal set to s (and not g).

Technical Details

• A multi-goal policy is trained using Universal Value Function Approximators (UVFA).
• Every episode starts by sampling a start state and a goal state. Each goal has a different reward function.
• The policy uses both the current state and the current goal state and produces a state-transition sequence s1, s2, …, sn.
• Each of these transitions si -> si+1 is stored in a replay buffer with both the original goal and a subset of other goals (a relabeling sketch follows this list).
• For the goal selection, the following strategies are tried:
  • Future - the goal state is a state observed k steps after the state transition, within the same episode.
  • Final - the goal state is the final state of the current episode.
  • Episode - k random states are selected from the current episode.
  • Random - k states are selected randomly.
• Any off-policy algorithm can be used. Specifically, DDPG is used.
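
A minimal sketch of the relabeling idea; this is an illustration, not the exact HER implementation (which tests goal achievement with a distance threshold and typically uses 0/-1 rewards). The function name and argument layout are my own.

    import random

    def her_relabel(episode, k=4, strategy="future"):
        # episode: list of (state, action, next_state) tuples from one rollout.
        relabeled = []
        for t, (s, a, s_next) in enumerate(episode):
            if strategy == "future":
                candidates = [sn for (_, _, sn) in episode[t:]]
            elif strategy == "final":
                candidates = [episode[-1][2]]
            else:  # "episode"
                candidates = [sn for (_, _, sn) in episode]
            for g in random.sample(candidates, min(k, len(candidates))):
                reward = 0.0 if s_next == g else -1.0  # sparse goal-reaching reward
                relabeled.append(((s, a, s_next), g, reward))
        return relabeled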

Experiments

• A robotic arm, simulated using MuJoCo, on push, slide, and pick-and-place tasks.
• DDPG with and without HER is evaluated on the 3 tasks.
• DDPG with HER significantly outperforms the baseline in all the cases.
diff --git a/_site/site/2018/12/25/Smooth-Loss-Functions-for-Deep-Top-k-Classification.html b/_site/site/2018/12/25/Smooth-Loss-Functions-for-Deep-Top-k-Classification.html new file mode 100644 index 00000000..25cf763b --- /dev/null +++ b/_site/site/2018/12/25/Smooth-Loss-Functions-for-Deep-Top-k-Classification.html @@ -0,0 +1,117 @@ +

Introduction

• For top-k classification tasks, cross-entropy is widely used as the learning objective even though it is the optimal metric only in the limit of infinite data.
• The paper introduces a family of smoothed loss functions that are specially designed for top-k optimization.
• Paper
• Code

Idea

• Inspired by multi-class SVMs, a surrogate loss (lk) is introduced that creates a margin between the ground truth and the kth largest score.

Equation 1

• Here s denotes the vector of scores output by the classifier model to be learnt, y is the ground-truth label, s[k] denotes the kth largest element of s, and s\p denotes the vector s without its pth element.
• The lk loss has two limitations:
  • It is continuous but not differentiable in s.
  • Its weak derivatives have at most 2 non-zero elements.
• The loss can be reformulated by adding and subtracting the k-1 largest scores of s\y and sy, and by introducing a temperature parameter τ.

Equation 2


Properties of L

• For any τ > 0, L is infinitely differentiable and has non-sparse gradients.
• Under mild conditions, L approaches lk (in a pointwise sense) as τ approaches 0+.
• It is an upper bound on the actual loss (up to a constant factor).
• It is a generalization of the cross-entropy loss for different values of k and τ, and for higher margins.

Computational Challenges

• nCk terms need to be evaluated for computing the loss for one sample (n is the number of classes).
• The loss L can be expressed in terms of the elementary symmetric polynomials σi(e) (the sum of all products of i distinct elements of a vector e). Thus the challenge is to compute σk efficiently.

Forward Computation

• Compute σk(e), where e is an n-dimensional vector, k << n, and e[i] != 0 for all i.
• σi(e) can be computed using the coefficients of the polynomial (X+e1)(X+e2)…(X+en), obtained via a divide-and-conquer approach with polynomial multiplication (see the sketch below).
• With some more optimizations (e.g. log(n) levels of recursion, with each level parallelized on a GPU), the resulting algorithm scales well with n on a GPU.
• Operations are performed in log-space, using the log-sum-exp trick, to achieve numerical stability in single floating-point precision.
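
A small numpy sketch of the divide-and-conquer computation (without the log-space and GPU optimizations the paper adds): multiplying out (X+e1)…(X+en) yields all σi(e) as coefficients.

    import numpy as np

    def poly_from_roots(e):
        # Coefficients of (X + e1)...(X + en), highest degree first,
        # so coeff[i] = sigma_i(e), the i-th elementary symmetric polynomial.
        if len(e) == 1:
            return np.array([1.0, e[0]])
        mid = len(e) // 2
        # Divide and conquer; np.convolve performs polynomial multiplication.
        return np.convolve(poly_from_roots(e[:mid]), poly_from_roots(e[mid:]))

    def sigma_k(e, k):
        return poly_from_roots(np.asarray(e, dtype=float))[k]

    print(sigma_k([1.0, 2.0, 3.0], 2))  # 1*2 + 1*3 + 2*3 = 11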

Backward computation

• The backward pass uses optimizations like computing the derivative of σj with respect to ei in a recursive manner.
• The appendix of the paper describes these techniques in detail.

Experiments

• Experiments are performed on CIFAR-100 (with noise) and ImageNet.
• For CIFAR-100 with noise, the labels are randomized with probability p (within the same top-level class).
• The proposed loss function is very robust to both noise and reductions in the amount of training data, as compared to the cross-entropy loss, for both top-k and top-1 performance.
diff --git a/_site/site/2019/01/02/Pre-training-Graph-Neural-Networks-with-Kernels.html b/_site/site/2019/01/02/Pre-training-Graph-Neural-Networks-with-Kernels.html new file mode 100644 index 00000000..d10533f7 --- /dev/null +++ b/_site/site/2019/01/02/Pre-training-Graph-Neural-Networks-with-Kernels.html @@ -0,0 +1,73 @@ +

Introduction

• The paper proposes a pretraining technique that can be used with GNN architectures to learn graph representations as induced by powerful graph kernels.
• Paper

Idea

• Graph kernel methods can learn powerful representations of the input graphs, but the learned representation is implicit, as the kernel function only computes the dot product between the representations.
• GNNs are flexible and powerful in terms of the representations they can learn, but they can easily overfit if a large amount of training data is not available, as is commonly the case for graphs.
• Kernel methods can thus be used to learn an unsupervised graph representation that can be finetuned using GNN architectures for supervised tasks.

Architecture

• Given a dataset of graphs g1, g2, …, gn, use a relevant kernel function to compute k(gi, gj) for all pairs of graphs.
• A siamese network is used to encode each pair of graphs into representations f(gi) and f(gj) such that dot(f(gi), f(gj)) equals k(gi, gj) (a sketch of this objective follows).
• The function f is trained to learn a compressed representation of the kernel’s feature space.
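
A minimal PyTorch sketch of the training objective, assuming precomputed kernel values and a shared (siamese) GNN encoder; the function name and the squared-error regression loss are my own choices.

    import torch

    def kernel_matching_loss(f_gi, f_gj, k_ij):
        # f_gi, f_gj: (batch, dim) embeddings of the two graphs in each pair,
        # produced by the same encoder. k_ij: (batch,) kernel values k(gi, gj).
        dots = (f_gi * f_gj).sum(dim=1)
        return ((dots - k_ij) ** 2).mean()  # regress dot products onto the kernel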

Experiments


Datasets

• Biological node-labeled graphs representing chemical compounds - MUTAG, PTC, NCI1

Baselines

• DGCNN
• Graphlet Kernel (GK)
• Random Walk Kernel
• Propagation Kernel
• Weisfeiler-Lehman subtree kernel (WL)

Results

• Pretraining uses the WL kernel.
• The pretrained model performs better than the baselines on 2 datasets but lags behind the WL method (which was used for pretraining) on the NCI1 dataset.

Notes

• The idea is straightforward and intuitive. In general, this kind of pretraining should help the downstream model. It would be interesting to try it on more datasets/kernels/GNNs so that more conclusive results can be obtained.
diff --git a/_site/site/2019/01/08/Efficient-Lifelong-Learning-with-A-GEM.html b/_site/site/2019/01/08/Efficient-Lifelong-Learning-with-A-GEM.html new file mode 100644 index 00000000..be4bbda1 --- /dev/null +++ b/_site/site/2019/01/08/Efficient-Lifelong-Learning-with-A-GEM.html @@ -0,0 +1,164 @@ +

Contributions

• A new (and more realistic) evaluation protocol for lifelong learning, where each data point is observed just once and disjoint sets of tasks are used for training and validation.
• A new metric that focuses on the efficiency of the models - in terms of sample complexity and computational (and memory) costs.
• A modification of Gradient Episodic Memory (GEM) that reduces the computational overhead of GEM without compromising on the results.
• Empirical validation that using task descriptors helps lifelong learning models and improves their few-shot learning capabilities.
• Link to the paper
• Link to the code

Learning Protocol

• Two groups of datasets - one for training and evaluation (DEV) and the other for cross-validation (DCV).
• Data can be sampled multiple times from the cross-validation dataset but only once from the training dataset.
• Each group of datasets (say DEV or DCV) is a list of task-specific datasets Dk (k is the task index).
• Each sample in Dk is of the form (x, t, y), where x is the data, t is the task descriptor and y is the output.
• Dk contains Bk minibatches of data.

Metrics


Accuracy

• ak,i,j = accuracy on test task j after training on the ith minibatch of training task k.
• Ak = mean over all j = 1 to k of (ak, Bk, j), i.e., train the model on the data for task k and then test it on all the tasks.

Forgetting Measure

• fjk = forgetting on task j after training on all minibatches up to task k.
• fjk = max over all l = 1 to k-1 of (al, Bl, j - ak, Bk, j)
• Forgetting: Fk = mean over all j = 1 to k-1 of (fjk)

LCA - Learning Curve Area

• Zb = average b-shot performance, where b is the minibatch number.
• Zb = mean over all k = 0 to T of (ak, b, k)
• LCAβ = mean over all b = 0 to β of (Zb)
• One special case is LCA0, which is the forward-transfer performance, i.e., performance on an unseen task.
• In the experiments, β is kept small as we want the model to learn from few examples (the metrics above are sketched in code below).
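
A compact numpy sketch of these metrics, assuming the per-minibatch accuracies have been logged into a tensor a[k, i, j] with a fixed number of minibatches per task (an assumption made here for simplicity):

    import numpy as np

    def lifelong_metrics(a, beta=10):
        # a[k, i, j]: accuracy on test task j after the i-th minibatch of task k.
        T = a.shape[0]
        A = [a[k, -1, :k + 1].mean() for k in range(T)]            # average accuracy
        F = [np.mean([a[:k, -1, j].max() - a[k, -1, j] for j in range(k)])
             for k in range(1, T)]                                 # forgetting
        Z = [np.mean([a[k, b, k] for k in range(T)]) for b in range(beta + 1)]
        return A, F, np.mean(Z)                                    # LCA_beta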

Model

• GEM has been shown to be very effective in the single-epoch setting but introduces a very high computational overhead.
• Averaged GEM (A-GEM) reduces this overhead by sampling (and using) only some examples from the episodic memory instead of using all of them (see the projection sketch below).
• While GEM provides better guarantees in terms of worst-case forgetting, A-GEM provides better guarantees in terms of average accuracy.
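
A minimal numpy sketch of the A-GEM gradient update, following the projection formula in the paper: when the proposed gradient g conflicts with the gradient g_ref computed on a memory minibatch, g is projected so that the memory loss is not increased.

    import numpy as np

    def agem_project(g, g_ref):
        # If g . g_ref >= 0, keep g; otherwise project:
        # g_tilde = g - (g . g_ref / g_ref . g_ref) * g_ref
        dot = np.dot(g, g_ref)
        if dot >= 0:
            return g
        return g - (dot / np.dot(g_ref, g_ref)) * g_ref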

Joint Embedding Model Using Compositional Task Descriptors

• Compositional task descriptors are used to speed up training on subsequent tasks.
• A matrix specifying the attribute values of the objects (to be recognized in the task) is used as the descriptor.
• A joint embedding space between image features and attribute embeddings is learned.

Experiments


Datasets


Setup


Results

• A-GEM outperforms the other models on all the datasets except MNIST, where Progressive Neural Networks lead. One reason could be that MNIST has a large number of training examples per task. However, Progressive Neural Networks make poor use of their capacity.
• While A-GEM and GEM have similar performance, GEM has a much higher computational and memory overhead.
• Use of task descriptors improves the accuracy of all the models.
• Overall, A-GEM offers a good tradeoff between average accuracy and efficiency - in terms of sample efficiency, memory requirements and computational costs.
diff --git a/_site/site/2019/01/15/Hierarchical-RL-Using-an-Ensemble-of-Proprioceptive-Periodic-Policies.html b/_site/site/2019/01/15/Hierarchical-RL-Using-an-Ensemble-of-Proprioceptive-Periodic-Policies.html new file mode 100644 index 00000000..46155f45 --- /dev/null +++ b/_site/site/2019/01/15/Hierarchical-RL-Using-an-Ensemble-of-Proprioceptive-Periodic-Policies.html @@ -0,0 +1,107 @@ +

Introduction

• The paper proposes a simple and robust approach for hierarchically training an agent in the sparse-reward setup.
• The broad idea is to train low-level primitives that are sufficiently diverse (so that they can be composed for solving higher-level tasks) and to train a high-level policy that learns to combine these primitives for any given downstream task.
• Link to the paper

Approach

• The state can be divided into two components: the proprioceptive states sp (measurements of the agent’s own body that can be directly controlled by the agent) and the external states se.

Low-Level Policy Training

• Low-level policies should be:
  • Diverse: should cover all the skills that the agent might have to perform.
  • Effective: can make significant changes to the environment.
  • Controllable: easy for high-level policies to use and control.
• For the low-level policy, the per-timestep reward is directly proportional to the change in the external state. The same reward is used for all the agents and environments (except regulated with environment-specific controls and survival rewards).

Phase conditioned policies

• Good movement policies are expected to be at least roughly periodic, and a phase input (or time index) is used to achieve periodicity.
• A phase-conditioned policy is given by f(sp, φ), where φ ∈ {0, 1, …, k-1} is the phase index.
• At each timestep t, the model receives the observation sp and the phase index φ = t % k. The phase index is represented by a vector bφ (see the sketch below).
• For phase-conditioned policies, the agent states and actions are encouraged to be cyclic with the help of a cyclic loss.
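
A tiny sketch of how the phase index can condition the policy input; representing bφ as a lookup table is an assumption made here for illustration.

    import numpy as np

    def phase_conditioned_input(s_p, t, k, phase_embeddings):
        # phase_embeddings: (k, d) table of phase vectors b_phi.
        phi = t % k  # the phase index cycles with period k
        return np.concatenate([s_p, phase_embeddings[phi]])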

Experiments

• Environments: Ant and Humanoid from MuJoCo.
• Low-level control:
  • Using phase-conditioning is helpful when training low-level primitives.
• High-level control:
  • Cross-maze environment with fixed goals
    • 3 goals along 3 paths.
    • The proposed method converges faster and to a smaller final distance to the goal, showing that it is both efficient and consistent (with smaller variance across random seeds).
  • Random-goal maze
    • The goal is randomly drawn from a set of goals.
    • “Cross”-shaped and “skull”-shaped mazes are considered.
    • Even with velocity rewards and pretraining on low-level objectives (which can be thought of as exploration bonuses), the baseline fails to get close to the goal locations, while the proposed model reaches the goal most of the time.
    • The main results are reported using PPO, though repeating the experiments with A2C and DQN shows that the idea is fairly robust.
    • The paper reports that, in these experiments, finetuning the low-level primitives did not help much, though this might not be the case for other environments.
diff --git a/_site/site/2019/01/22/Modular-meta-learning.html b/_site/site/2019/01/22/Modular-meta-learning.html new file mode 100644 index 00000000..52d61eef --- /dev/null +++ b/_site/site/2019/01/22/Modular-meta-learning.html @@ -0,0 +1,196 @@ +

Introduction

• The paper proposes an approach for learning neural networks (modules) that can be combined in different ways to solve different tasks (combinatorial generalization).
• The proposed model is called BOUNCEGRAD.
• Link to the paper
• Link to the code

Setup

• Focuses on supervised learning.
• Task distribution p(T).
• Each task is a joint distribution pT(x, y) over (x, y) data pairs.
• Given data from m meta-training tasks and a meta-test task, find a hypothesis h which performs well on unseen data drawn from the meta-test task.

Structured Hypothesis

• Given a compositional scheme C, a set of modules F1, …, Fk (represented as a whole by F) and the set of their respective parameters θ1, …, θk (represented as a whole by θ), (C, F, θ) represents the set of possible functional input-output mappings. These mappings form the hypothesis space.
• A structured hypothesis model is specified by which modules to use and their parametric forms (but not the parameter values).

Examples of compositional schemes

• Choosing a single module for the task at hand.
• A fixed compositional structure, but with different modules selected every time.
• A weighted ensemble (maybe using an attention mechanism).
• A general function-composition tree.

Phases

• Offline meta-learning phase:
  • Take the training and validation datasets for the first k tasks and generate a parameterization for each module, θ1, …, θk.
  • The hypothesis (or composition) to use comes from the online meta-test learning phase.
  • In this stage, find the best θ given a structure.
• Online meta-test learning phase:
  • Given a hypothesis space and θ, the output is a compositional form (or hypothesis) that specifies how to compose the modules.
  • In this stage, find the best structure given a hypothesis space and θ.

Learning Algorithm

• During the meta-test learning phase, simulated annealing is used to find the optimal structure, with the temperature T decreased over time (a sketch follows this list).
• During the meta-learning phase, the actual objective function is replaced by a surrogate, smooth objective function (during the search step) to avoid local minima.
• Once a structure has been picked, any gradient-descent-based approach can be used to optimize the modules.
• Basically, the state of the optimization process comprises the parameters and the temperature. Together, they are used to induce a distribution over structures. Given a structure, θ is optimized and T is annealed over time.
• The learning procedure can be improved upon by performing parameter tuning during the online (meta-test learning) phase as well. The resulting approach is referred to as MOMA - MOdular MAml.
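
A generic sketch of the simulated-annealing structure search; the proposal distribution and annealing schedule here are placeholders, not the paper's exact choices.

    import math
    import random

    def anneal_structure(structures, loss_fn, steps=1000, t0=1.0):
        # structures: candidate module compositions; loss_fn(s): validation
        # loss of structure s under the current module parameters theta.
        s = random.choice(structures)
        for step in range(1, steps + 1):
            t = t0 / step  # annealing schedule
            proposal = random.choice(structures)
            delta = loss_fn(proposal) - loss_fn(s)
            # Accept improvements, and occasionally worse moves early on.
            if delta < 0 or random.random() < math.exp(-delta / t):
                s = proposal
        return s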

Experiments


Approaches

• Pooled - a single network using the combined data of all the tasks.
• MAML - a single network trained using MAML.
• BOUNCEGRAD - modular networks without MAML adaptation in online learning.
• MOMA - BOUNCEGRAD with MAML adaptation in online learning.

Domains


Simple Functional Relationships

• Sine-function prediction problem.
• In general, MOMA outperforms the other models.
• With a small amount of online training data, BOUNCEGRAD outperforms the other models as it has a better structural prior.

Predicting the motion of pushed objects

• 11 different objects (with different shapes) on 4 surfaces with different friction properties.
• 2 meta-learning scenarios are considered. In the first case, the object-surface combination in the test task was present in some meta-training tasks; in the second case, it was not.
• For previously seen combinations, MOMA performs the best, followed by BOUNCEGRAD and MAML.
• For unseen combinations, all 3 are equally good.
• The compositional scheme is the attention mechanism.
• An interesting result is that the modules seem to specialize (and activate more often) based on the shape of the object.

Predicting the next frame of a kinematic skeleton (using motion capture data)

• Composition structure - generating kinematic subtrees for each body part (2 legs, 2 arms, 2 torsos).
• Again, 2 setups are used - one where all activities in the training and meta-test tasks are shared, and another where the activities are not shared.
• For known activities, MOMA and BOUNCEGRAD perform the best, while for unknown activities, MOMA performs the best.

Notes

• While the approach is interesting, a set of tasks more suitable from the point of view of composition would make the case more convincing.
• It would be useful to see the computational tradeoff between MAML, BOUNCEGRAD, and MOMA.
diff --git a/_site/site/2019/01/29/Diversity-is-All-You-Need-Learning-Skills-without-a-Reward-Function.html b/_site/site/2019/01/29/Diversity-is-All-You-Need-Learning-Skills-without-a-Reward-Function.html new file mode 100644 index 00000000..a8f0f7d0 --- /dev/null +++ b/_site/site/2019/01/29/Diversity-is-All-You-Need-Learning-Skills-without-a-Reward-Function.html @@ -0,0 +1,104 @@ +

Introduction

• The paper proposes an approach to learn useful skills without a reward function, by maximizing an information-theoretic objective with a maximum-entropy policy.
• Skills are defined as latent-conditioned policies that alter the state of the environment in a consistent way.
• Link to the paper
• Link to the code

Setup

• An unsupervised “exploration” stage followed by a supervised stage.

Desirable Qualities of Skills

• Skills should dictate the states that the agent visits. Different skills should visit different states in order to be distinguishable.
• States (not actions) should be used to distinguish between skills, as not all actions change the state (for an outside observer).
• Skills are encouraged to be diverse and “exploratory” by learning skills that act as randomly as possible (have high entropy).

Loss Formulation

• (S, A) - state and action.
• z ~ p(z) - latent variable used to condition the policy.
• Skill - the policy conditioned on a fixed z.
• The objective is to maximize the mutual information between the skill and the state (MI(S; Z)), i.e., the skill should control which states are visited, or, equivalently, the skill should be inferrable from the states visited.
• Simultaneously, minimize the mutual information between skills and actions given the state, to ensure that the state (and not the action) is used to distinguish the skills.
• Maximize the entropy of the mixture of policies (p(z) and all the skills).

Implementation

• Policy π(a | s, z).
• The task reward is replaced by the pseudo-reward log qφ(z | s) - log p(z) (sketched below).
• During unsupervised training, z is sampled at the start of the episode and kept fixed for the whole episode.
• The learning agent is rewarded for visiting states that are easy to discriminate, while the discriminator is updated to better predict z from the states visited.
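
A one-function sketch of the pseudo-reward, assuming a uniform prior over skills (as used in the paper) and a discriminator that outputs log-probabilities; the function and argument names are my own.

    import numpy as np

    def diayn_pseudo_reward(log_q_z_given_s, z, num_skills):
        # log_q_z_given_s: discriminator log-probabilities over skills for
        # the current state, shape (num_skills,). p(z) is the uniform prior.
        log_p_z = -np.log(num_skills)
        return log_q_z_given_s[z] - log_p_z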

Observations


Analysis of Learned Skills

• The agent learns a diverse set of primitive behaviors for all tasks, ranging from 2 DoF to 111 DoF.
• For the inverted pendulum and mountain car, the skills become increasingly diverse throughout training.
• Using a uniform prior, in place of a learned prior, for p(z) allows for the discovery of more diverse skills.
• The proposed approach can be used as a pretraining technique, where the best-performing primitives (from unsupervised training) can be finetuned with task-specific rewards.
• The discovered skills can be used for hierarchical RL by learning a meta-policy (which chooses the skill to execute for k steps).
• Modifying the discriminator in the proposed formulation can be used to bias DIAYN towards discovering particular types of policies. This provides a mechanism for incorporating “supervision” into the learning setup.
• The “discovered” primitives can also be used for imitation learning.
diff --git a/_site/site/2019/02/05/Linguistic-Knowledge-as-Memory-for-Recurrent-Neural-Networks.html b/_site/site/2019/02/05/Linguistic-Knowledge-as-Memory-for-Recurrent-Neural-Networks.html new file mode 100644 index 00000000..8ed339f8 --- /dev/null +++ b/_site/site/2019/02/05/Linguistic-Knowledge-as-Memory-for-Recurrent-Neural-Networks.html @@ -0,0 +1,59 @@ +
• Link to the paper
• Training RNNs to model long-term dependencies is difficult, but in some cases, information about dependencies between elements (of the sequence) may be available in the form of symbolic knowledge.
• For example, when encoding sentences, coreference and hypernymy relations can be extracted between tokens.
• These elements (tokens) can be connected with each other by different kinds of edges, resulting in a graph data structure.
• One approach could be to model this knowledge (encoded in the graph) using a graph neural network (GNN).
• The authors prefer to encode the information into 2 DAGs (via topological sorting), as training a GNN could add some extra overhead.
• This results in the Memory as Acyclic Graph Encoding RNN (MAGE-RNN) architecture. Its GRU version is referred to as MAGE-GRU.
• Given an input sequence of tokens [x1, x2, …, xT] and information about which tokens relate to each other, a graph G is constructed with different (possibly typed) edges.
• Given the graph G, two DFS orderings are computed - a forward DFS and a backward DFS.
• MAGE-RNN uses separate networks for accessing the forward and backward DFS orders.
• A separate hidden state is maintained for each edge type to separate memory content from addressing.
• For any DFS order (forward or backward), the representation at time t is given as the concatenation of the representations of the different edge types at that time.
• The hidden states (for different edge types at time t) are updated in topological order using the current state of all incoming edges at xt.
• The representation of the DFS order is given as the sequence of all the previous representations.
• In some cases, elements across multiple sequences could be related to each other. In that case, the graph is decomposed into a collection of DAGs by taking one random permutation of the sequences and decomposing it into the forward and the backward graphs, and MAGE-GRU is applied to the DAGs.
• The model is evaluated on the task of text comprehension with coreference, on the bAbI dataset (story-based QA), the LAMBADA dataset (broad-context language modeling) and the CNN dataset (cloze-style QA).
• MAGE-GRU is used as a replacement for the GRU units in bi-directional GRUs and the GA-Reader architecture.
• DAG-RNN and a shared version of MAGE-GRU (with shared edge types) are the other baselines.
• In all cases, the model with MAGE-GRU works best.
diff --git a/_site/site/2019/02/19/TuckER-Tensor-Factorization-for-Knowledge-Graph-Completion.html b/_site/site/2019/02/19/TuckER-Tensor-Factorization-for-Knowledge-Graph-Completion.html new file mode 100644 index 00000000..08bbde0c --- /dev/null +++ b/_site/site/2019/02/19/TuckER-Tensor-Factorization-for-Knowledge-Graph-Completion.html @@ -0,0 +1,134 @@ +

Introduction

• TuckER is a simple, yet powerful, linear model that uses Tucker decomposition for the task of link prediction in knowledge graphs.
• Paper
• Implementation

Knowledge Graph as a Tensor

• Let E be the set of all entities and R the set of all relations in a given knowledge graph (KG).
• The KG can be represented as a list of triples of the form (source entity, relation, object entity), i.e. (es, r, eo).
• The list of triples can be represented as a third-order tensor (of binary values), where each element corresponds to a triple and each element’s value indicates whether that triple is present in the KG or not.
• The link-prediction task can be formulated as: given the set of all triples, learn a scoring function that assigns a score to each triple. The score indicates whether the triple is actually present in the KG or not.

TuckER Decomposition

• Tucker decomposition factorizes a tensor into a set of factor matrices and a smaller core tensor.
• In the specific case of three-mode tensors (the alternate representation of a KG), the given tensor X (of shape IxJxK) can be factorized into a core tensor W (of shape PxQxR) and 3 factor matrices - A (of shape IxP), B (of shape JxQ) and C (of shape KxR) - such that X is approximately W x1 A x2 B x3 C, where xn denotes the tensor product along the nth mode.
• Generally, P, Q, R are smaller than I, J, K (respectively), so W can be seen as a compressed version of X.
• Two embedding matrices are used, for embedding the entities and the relations respectively.
• The entity embedding matrix E is shared for both subject and object entities, i.e., E = A = B.
• The scoring function is given as W x1 es x2 wr x3 eo, where es, wr and eo are the embedding vectors corresponding to es, r and eo respectively (see the sketch below). Note that both the core tensor and the factor matrices are learnt.
• The model is trained with the standard negative log-likelihood loss, given as (for one triple): -(y * log(p) + (1-y) * log(1-p)).
• To speed up training and increase accuracy, 1-N scoring is used: a given (es, r) pair is simultaneously scored against all entities, using the local closed-world assumption (the knowledge graph is only locally complete).
• Handling asymmetric relations is straightforward: a relation embedding is learnt alongside a relation-agnostic core tensor, which enables knowledge sharing across relations.
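
The scoring function maps directly onto an einsum; a minimal numpy sketch (the dimension names de and dr are my own):

    import numpy as np

    def tucker_score(W, e_s, w_r, e_o):
        # W x1 e_s x2 w_r x3 e_o with W: (de, dr, de) core tensor,
        # e_s, e_o: (de,) entity embeddings, w_r: (dr,) relation embedding.
        return np.einsum('pqr,p,q,r->', W, e_s, w_r, e_o)

    def tucker_scores_1_to_N(W, e_s, w_r, E):
        # 1-N scoring: score (e_s, r) against every entity at once.
        # E: (num_entities, de) entity embedding matrix.
        return np.einsum('pqr,p,q,nr->n', W, e_s, w_r, E)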

Theoretical Analysis

• One important consideration is the expressive power of TuckER models, especially in relation to other models like ComplEx and SimplE.
• It can be shown that TuckER is fully expressive, i.e., given any ground truth over E and R, there exists a TuckER model which can perfectly represent the data - using 1-hot entity and relation embeddings.
• For full expressiveness, the required dimensionality of the entity (relation) embeddings is nE (nR), where nE (nR) is the number of entities (relations). In comparison, the required dimensionality for ComplEx is nE * nR (for both entities and relations), and for SimplE it is min(nE * nR, number of facts + 1) (for both entities and relations).
• Many existing models like RESCAL, DistMult, ComplEx, SimplE etc. can be seen as special cases of TuckER.

Experiments


Datasets

  • FB15k, FB15k-237, WN18, WN18RR
  • The max number of entities is around 41K and the max number of relations is around 1.3K.

Implementation

  • BatchNorm, Dropout and Learning rate decay are used.

Metrics

  • Mean Reciprocal Rank (MRR) - the average of the inverse of the rank assigned to the true triple, over all the ne generated candidate triples (see the sketch after this list).
  • hits@k (k = 1, 3, 10) - the percentage of times the true triple is ranked in the top k of the ne generated candidate triples.
  • Higher is better for both the metrics.
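
A minimal sketch of both metrics, assuming `all_scores` holds one score per candidate triple for each query and `true_indices` marks the true triple (hypothetical variable names):

```python
import numpy as np

def mrr_and_hits(all_scores, true_indices, k=10):
    """all_scores: (num_queries, ne) candidate scores; true_indices: (num_queries,)."""
    reciprocal_ranks, hits = [], []
    for scores, t in zip(all_scores, true_indices):
        # Rank of the true triple among all ne candidates (1 = best).
        rank = int((scores > scores[t]).sum()) + 1
        reciprocal_ranks.append(1.0 / rank)
        hits.append(rank <= k)
    return np.mean(reciprocal_ranks), np.mean(hits)
```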

Results

  • TuckER outperforms all the baseline models on all but one task.
  • Dropout is an important factor, with higher dropout rates (0.3, 0.4, 0.5) needed for datasets with fewer training examples per relation (and hence more prone to overfitting).
  • TuckER improves performance more significantly when the number of relations is large.
  • Even with lower embedding dimensions, TuckER’s performance does not deteriorate as much as other models.
diff --git a/_site/site/2019/03/12/Model-Primitive-Hierarchical-Lifelong-Reinforcement-Learning.html b/_site/site/2019/03/12/Model-Primitive-Hierarchical-Lifelong-Reinforcement-Learning.html new file mode 100644 index 00000000..2d1a0dc7 --- /dev/null +++ b/_site/site/2019/03/12/Model-Primitive-Hierarchical-Lifelong-Reinforcement-Learning.html @@ -0,0 +1,164 @@ +

Introduction

  • The paper presents a framework that uses diverse suboptimal world models to break complex policies into simpler and modular sub-policies.
  • Given a task, both the sub-policies and the controller are learned simultaneously, in a bottom-up manner.
  • The framework is called Model Primitive Hierarchical Reinforcement Learning (MPHRL).
  • Link to the paper

Idea

  • Instead of learning a single transition model of the environment (aka world model) that can model the transitions very well, it is sufficient to learn several (say k) suboptimal models (aka model primitives).
  • Each model primitive will be good in only a small part of the state space (aka its region of specialization).
  • These model primitives can then be used to train a gating mechanism for selecting sub-policies to solve a given task.
  • Since these model primitives are suboptimal, they are not directly used with model-based RL but are used to obtain useful functional decompositions, and the sub-policies are trained with model-free approaches.

Single Task Learning

  • A gating controller is trained to choose the sub-policy whose model primitive makes the best prediction.
  • This requires modeling p(Mk | st, at, st+1), where p(Mk) denotes the probability of selecting the kth model primitive. This is hard to compute as the system does not have access to st+1 and at at time t, before it has chosen the sub-policy.
  • Properly marginalizing st+1 and at would require expensive MC sampling. Hence an approximation is used and the gating controller is modeled as a categorical distribution - to produce p(Mk | st). It is trained via a conditional cross-entropy loss where the ground truth distribution is obtained from transitions sampled in a rollout (see the sketch after this list).
  • The paper notes that this technique is biased but reports that it still works for the downstream tasks.
  • The gating controller composes the sub-policies as a mixture of Gaussians.
  • For learning, the PPO algorithm is used, with each model primitive's gradient weighted by the probability from the gating controller.
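
A minimal sketch of the gating idea, assuming k learned model primitives with a hypothetical `predict_next_state` method; soft ground-truth responsibilities come from prediction errors, and the gating network p(Mk | st) is regressed onto them with cross-entropy:

```python
import torch
import torch.nn.functional as F

def responsibilities(primitives, s_t, a_t, s_next):
    """Soft ground-truth over primitives: lower prediction error => higher weight."""
    errors = torch.stack([((m.predict_next_state(s_t, a_t) - s_next) ** 2).sum(-1)
                          for m in primitives], dim=-1)          # (batch, k)
    return F.softmax(-errors, dim=-1)                            # (batch, k)

def gating_loss(gating_net, primitives, s_t, a_t, s_next):
    """Train p(Mk | st) to match the responsibility signal (conditional cross-entropy)."""
    logits = gating_net(s_t)                                     # (batch, k)
    target = responsibilities(primitives, s_t, a_t, s_next).detach()
    return -(target * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```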

Lifelong Learning

  • Different tasks could share common subtasks but may require a different composition of subtasks. Hence, the learned sub-policies are transferred across tasks but not the gating controller or the baseline estimator (from PPO).

Experiments

  • Domains:
    • Mujoco ant navigating different mazes.
    • Stacker arm picking up and placing different boxes.
  • Implementation Details:
    • Gaussian sub-policies.
    • PPO as the baseline.
    • Model primitives are hand-crafted using the true next state provided by the environment simulator.
  • Single Task
    • Only the maze task is considered, with the start position (of the ant) and the goal position fixed.
    • Observation includes the distance from the goal.
    • Forcing the agent to decompose the problem, when a more direct solution may be available, causes the sample complexity to increase on one task.
  • Lifelong Learning
    • Maze
      • 10 random Mujoco ant mazes are used as the task distribution.
      • MPHRL takes almost twice the number of steps (as compared to the PPO baseline) to solve the first task, but this cost gets amortized over the distribution and the model takes half the number of steps as compared to the baseline (summed over the 10 tasks).
    • Pick and Place
      • 8 Pick and Place tasks are created with a max of 3 goal locations.
      • Observation includes the position of the goal.
  • Ablations
    • Overlapping model primitives can degrade the performance (to some extent). Similarly, the performance suffers when redundant primitives are introduced, indicating that the gating mechanism is not very robust.
    • Sub-policies could quickly adapt to the previous tasks (on which they were trained initially) despite being finetuned on subsequent tasks.
    • The order of tasks (in the 10-Maze task) does not degrade the performance.
    • Transferring the gating controller leads to negative transfer.
  • Notes
    • I think the biggest strength of the work is that accurate dynamics models are not needed (which are hard to train anyway!), though the experimental results are not conclusive, given the limited number of domains on which the approach is tested.
diff --git a/_site/site/2019/03/16/To-Tune-or-Not-to-Tune-Adapting-Pretrained-Representations-to-Diverse-Tasks.html b/_site/site/2019/03/16/To-Tune-or-Not-to-Tune-Adapting-Pretrained-Representations-to-Diverse-Tasks.html new file mode 100644 index 00000000..f7927cef --- /dev/null +++ b/_site/site/2019/03/16/To-Tune-or-Not-to-Tune-Adapting-Pretrained-Representations-to-Diverse-Tasks.html @@ -0,0 +1,146 @@ +
  • Link to the paper
  • The paper provides useful empirical advice for adapting pretrained language models for a given target task.
  • Pre-trained models considered
    • ELMo
    • BERT
  • Tasks considered
    • Named Entity Recognition (NER) - CoNLL 2003 dataset
    • Sentiment Analysis (SA) - Stanford Sentiment Treebank (SST-2) dataset
    • Natural Language Inference (NLI) - MultiNLI and Sentences Involving Compositional Knowledge (SICK-E) datasets
    • Paraphrase Detection (PD) - Microsoft Research Paraphrase Corpus (MRPC)
    • Semantic Textual Similarity (STS) - Semantic Textual Similarity Benchmark (STS-B) and SICK-R
    • The last 3 tasks (NLI, PD, STS) are defined for sentence pairs.
  • Adaptation Strategies
    • Feature Extraction
      • The pretrained model is only used for extracting features and its weights are kept fixed.
      • For both ELMo and BERT, the contextual representations of the words from all the layers are extracted.
      • A weighted combination of these layers is used as an input to the task-specific model (see the sketch after this list).
      • Task-specific models
    • Fine-tuning
      • The pretrained model is finetuned on the target task.
      • Task-specific models for ELMo
        • NER - CRF on top of the LSTM states
        • SA - Max-pool over the language model states followed by a softmax layer
        • NLI, PD, STS - cross-sentence bi-attention between the language model states, followed by pooling and a softmax layer.
      • Task-specific models for BERT
        • NER - Extract the representation of the first word-piece of each token, followed by the softmax layer
        • SA, NLI, PD, STS - standard BERT training
  • Main observations
    • Feature extraction and fine-tuning have comparable performance in most cases, unless the two tasks are highly similar (fine-tuning is better) or highly dissimilar (feature extraction is better).
    • For ELMo, feature extraction consistently outperforms fine-tuning for the sentence pair tasks (NLI, PD, STS). The reverse trend is observed for BERT, with fine-tuning being better on sentence pair tasks.
    • Adding extra parameters is helpful for feature extraction but not fine-tuning.
    • ELMo fine-tuning requires careful tuning and other tricks like triangular learning rates, gradual unfreezing and discriminative fine-tuning.
    • For the tasks considered, no correlation is observed between the distance of the source and target domains and adaptation performance.
    • Training a diagnostic classifier (on the intermediate representations) suggests that fine-tuning improves the performance of the classifier at all the intermediate layers (which is sort of expected).
    • In terms of mutual information estimates, fine-tuned representations have a much higher mutual information as compared to the feature-extraction based representations.
    • Knowledge for single sentence tasks seems to be mostly concentrated in the last layers, while for pair classification tasks, the knowledge seems to be gradually built up in the intermediate layers, all the way up to the last layer.
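
As a concrete illustration of the feature-extraction strategy above, here is a minimal sketch of the layer-weighting (an ELMo-style "scalar mix"), with hypothetical shapes; the softmax weights and scale are learned with the task-specific model while the pretrained layers stay frozen:

```python
import torch

class ScalarMix(torch.nn.Module):
    """Learned weighted combination of frozen pretrained layers."""
    def __init__(self, num_layers):
        super().__init__()
        self.weights = torch.nn.Parameter(torch.zeros(num_layers))
        self.gamma = torch.nn.Parameter(torch.ones(1))

    def forward(self, layer_outputs):
        # layer_outputs: list of (batch, seq_len, dim) tensors, one per layer.
        w = torch.softmax(self.weights, dim=0)
        mixed = sum(wi * h for wi, h in zip(w, layer_outputs))
        return self.gamma * mixed  # fed into the task-specific model

# Usage: run the frozen encoder under torch.no_grad(), then
# features = ScalarMix(num_layers=3)([h0, h1, h2])
```
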
diff --git a/_site/site/2019/03/26/GNN-Explainer-A-Tool-for-Post-hoc-Explanation-of-Graph-Neural-Networks.html b/_site/site/2019/03/26/GNN-Explainer-A-Tool-for-Post-hoc-Explanation-of-Graph-Neural-Networks.html new file mode 100644 index 00000000..32c28481 --- /dev/null +++ b/_site/site/2019/03/26/GNN-Explainer-A-Tool-for-Post-hoc-Explanation-of-Graph-Neural-Networks.html @@ -0,0 +1,200 @@ +

Introduction

  • Graph Neural Networks (GNNs) are a family of powerful machine learning (ML) models for graphs that can combine node information with structural information.
  • One downside of GNNs is that their predictions are hard to interpret.
  • The paper proposes the GNN Explainer model for solving the problem of interpretability.
  • Paper

Desiderata for GNN explanations

  • Local edge fidelity - identify the subgraph structure (ideally the smallest) that significantly affected the predictions of the GNN, ie identify the important edges in the graph (for a given prediction).
  • Local node fidelity - identify the important node features and correlations in the features of the neighboring nodes.
  • Single instance and multi-instance explanations - support both single-instance prediction tasks and multi-instance prediction tasks.
  • Model Agnostic - support a large family of models (ideally all).
  • Task Agnostic - support a large family of tasks (ideally all).

Approach

  • I first describe the single-instance prediction case and use that as the base to describe the multi-instance prediction case. All the discussion in this section assumes a single-instance prediction task.
  • Input: Trained GNN, a single instance whose prediction is to be explained.
  • Task: Identify the small subgraph and the small subset of features that explain the prediction.
  • Idea: Maximize the mutual information (MI) between the GNN and the explanation by learning a graph mask which can be used for selecting the relevant subgraph (from the GNN’s computational graph) and features (from all layers of the GNN).
  • The computational graph of the GNN (corresponding to a node) refers to the approximately L-hop neighborhood of the node in the graph, ie the subgraph formed by the nodes and edges whose representations affected the representation of the given node.

Single-Instance Explanations

  • For a node v, the information used to predict its label y is completely described by its computation graph Gc(v) and the associated feature set Xc(v). The feature set includes the features of all the nodes in the computation graph.
  • When constructing the explanation, only Gc(v) and Xc(v) are used.
  • The task can be reformulated as identifying a subgraph GS (a subset of Gc(v)) with associated features XS which are important when predicting the label y for node v.
  • “Importance” is measured in terms of MI:

MI(Y, (GS, XS)) = H(Y) - H(Y | G = GS, X = XS) where H is the entropy and Y is a random variable representing the prediction.

  • A further constraint, |GS| < k, is imposed to obtain concise explanations.
  • Since H(Y) is fixed (recall that the network has already been trained and is now being used in inference mode), maximizing MI is equivalent to minimizing the conditional entropy H(Y | G = GS, X = XS).
  • This is equivalent to selecting the subgraph that minimizes the uncertainty in the prediction of y when the computational graph is Gc(v).

Optimization Process

  • Given the exponentially large number of possible subgraphs, we cannot directly optimize the given equation.
  • A “relaxed” adjacency matrix (whose values are real numbers in the range 0 to 1) is introduced, where each element of this fractional adjacency matrix is smaller than the corresponding element of the original adjacency matrix. Gradient descent can be performed on this adjacency matrix.
  • The “relaxed” GS can be interpreted as a variational approximation of the subgraph distribution of Gc(v), and the objective can be written as min E_GS[H(Y | G = GS, X = XS)].
  • Now the paper makes a big approximation - that the GNN is convex - so as to leverage Jensen's inequality and push the expectation inside the entropy term to get an upper bound, and then minimizes that, ie min H(Y | G = E[GS], X = XS).
  • The paper reports that the convexity approximation (along with the discreteness constraint) works in practice.
  • Next, a mean-field approximation is used to decompose P(GS) as a multivariate Bernoulli distribution, ie the product of AS(i, j) for all (i, j) belonging to Gc(v). AS can be optimized directly and its values represent the expectation of the Bernoulli distribution on whether the edge (i, j) exists.
  • Given the constraints on AS, it is easier to learn a mask matrix M and optimize that, such that AS = M * Ac (element-wise). Additionally, the sigmoid operator can be applied on M (see the sketch after this list).
  • Once M is learned, only the top k values are retained.
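
A minimal sketch of the mask optimization, assuming a frozen `gnn` that accepts a (possibly fractional) adjacency matrix and returns per-node logits (a hypothetical interface); the mask goes through a sigmoid, multiplies the adjacency matrix element-wise, and is trained to keep the explained prediction confident while staying close to discrete:

```python
import torch

def explain(gnn, A_c, X, node_idx, steps=200, lr=0.01):
    """Learn an edge mask M such that A_S = sigmoid(M) * A_c explains node_idx."""
    M = torch.randn_like(A_c, requires_grad=True)
    opt = torch.optim.Adam([M], lr=lr)
    with torch.no_grad():
        y = gnn(A_c, X)[node_idx].argmax()           # label being explained
    for _ in range(steps):
        A_s = torch.sigmoid(M) * A_c                 # masked (fractional) adjacency
        log_probs = torch.log_softmax(gnn(A_s, X)[node_idx], dim=-1)
        p = torch.sigmoid(M)
        # Cross-entropy on the explained label, plus an element-wise entropy
        # regularizer that pushes the mask towards discrete values.
        loss = -log_probs[y] + 0.1 * (-(p * torch.log(p + 1e-8)
                                        + (1 - p) * torch.log(1 - p + 1e-8))).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(M) * A_c                    # keep only the top-k values afterwards
```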

Including Node Features in the Explanation

  • Similar to the previous approach, another feature mask is learned (either one for the entire GNN or one per node of the GNN) and is used as a feature selector.
  • The mask could either be learned such that the same set of node features (in terms of dimensions) is selected for all nodes, or a different set of features is selected per node. The paper uses the former as it is more straightforward.
  • Just like before, a “relaxed” mask MT is trained to select features as MT * XS.
  • One tricky case is where a feature is important but its value is set to 0. In that case, the value will be masked even though it should not be.
  • The workaround is to use Monte Carlo (MC) estimates of the marginals of the missing features. This gives a way to assign importance scores to each feature dimension, and a form of the reparameterization trick is used to perform end-to-end learning.
  • Masks are encouraged to be discrete by regularizing their element-wise entropy.
  • The resulting computation graph is valid in the sense that it allows message passing towards the central node v.

Multi-Instance Explanations

  • Given a set of nodes (having the label, say, y), the task is to obtain a global explanation of the predictions.
  • For the given class, a prototypical reference node is chosen by computing the mean of the embeddings of all the nodes in the class and then selecting the node which is closest to the mean.
  • Now, compute the important computational graph corresponding to this node and align the computational subgraphs of all the other nodes (in the given class) to this reference.
  • Let A* be the adjacency matrix and X* be the feature matrix for the explanation corresponding to the reference node. Let Av and Xv be the adjacency matrix and feature matrix of the to-be-aligned computational graph.
  • A relaxed alignment matrix P is optimized to align the nodes and features in the two graphs, ie we minimize |PT Av P - A*| + |PT Xv - X*|.
  • Choosing concise explanations helps in efficient graph matching.
  • For GNNs that compute attention over the entire graph, edges with low attention weights can be pruned to increase efficiency.

Experiments

  • Datasets
    • Node classification: BA-Shapes, BA-Community, Tree-Cycles, Tree-Grid
    • Graph classification: MUTAG, Reddit-Binary
  • Baselines
    • GRAD - Compute the gradient of the model loss with respect to the adjacency matrix and the node features, and fix the edges with the highest absolute gradient.
    • GAT - Graph Attention Network
  • The proposed model seems to outperform the baselines both qualitatively and quantitatively. But the results should be taken with a grain of salt, as only 2 baselines are considered.
diff --git a/_site/site/2019/04/02/Meta-Learning-Update-Rules-for-Unsupervised-Representation-Learning.html b/_site/site/2019/04/02/Meta-Learning-Update-Rules-for-Unsupervised-Representation-Learning.html new file mode 100644 index 00000000..a3242fc2 --- /dev/null +++ b/_site/site/2019/04/02/Meta-Learning-Update-Rules-for-Unsupervised-Representation-Learning.html @@ -0,0 +1,122 @@ +

Introduction

  • Standard unsupervised learning aims to learn transferable features. The paper proposes to learn a transferable learning rule (in an unsupervised manner) that can generalize across tasks and architectures.
  • Paper

Approach

  • Consider training the model with supervised learning: φt+1 = SupervisedUpdate(φt, xt, yt, θ).
  • Here t denotes the step, (x, y) denotes the data points, and θ denotes the hyperparameters of the optimizer.
  • Extending this formulation to meta-learning, one could say that t is the step of the inner loop and θ are the parameters of the meta-learning model.
  • Further, the paper proposes to use φt+1 = UnsupervisedUpdate(φt, xt, θ), ie yt is not used (or even assumed to be available, as this is unsupervised learning).
  • The meta update rule is used to learn the weights of a meta-model by performing SGD on the sum of the MetaObjective over the distribution of tasks (over the course of inner-loop training).

Model

  • Base model: MLP with parameters φt.
  • To ensure that it generalizes across architectures, the update rule is designed to be neuron-local, ie updates are a function of pre- and post-synaptic neurons, though, in practice, this constraint is relaxed to decorrelate neurons by using cross-neuron information.
  • Each neuron i in every layer l (in the base model) has an update network (MLP) which takes as input the feedforward activations, feedback weights and error signals, ie hbl(i) = MLP(xbl(i), zbl(i), vl+1, δl(i), θ), where:
    • b - index of the minibatch
    • xl - pre non-linearity activations
    • zl - post non-linearity activations
    • vl - feedback weights
    • δl - error signal
  • All the update networks share the meta parameters θ.
  • The model is run in a standard feed-forward manner and the update network (corresponding to each unit) is used to generate the error signal δbl(i) = lin(hbl(i)).
  • This loss is backpropagated using the set of learned backward weights vl instead of the forward weights wl.
  • The weight update Δwl is also generated using a per-neuron update network.

Meta Objective

  • The MetaObjective is based on fitting a linear regression model to labeled examples with a small number of data points (see the sketch after this list).
  • Given the emphasis on learning generalizable features, the weights (of the linear regression) are estimated on one batch and evaluated on another batch.
  • The MetaObjective is to reduce the cosine distance between yb and vTxbL, where:
    • yb - actual labels on the evaluation batch
    • xbL - features of the evaluation batch (using the base model)
    • v - parameters of the linear regression model (learned on the training batch)
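
A minimal sketch of this meta-objective, with hypothetical helper names; a ridge-regularized linear fit on one batch, scored by cosine distance on another:

```python
import torch

def meta_objective(features_train, y_train, features_eval, y_eval, reg=1e-3):
    """Fit linear regression on one batch, score cosine distance on another."""
    d = features_train.shape[1]
    # Closed-form ridge regression: v = (X^T X + reg*I)^-1 X^T y
    v = torch.linalg.solve(
        features_train.T @ features_train + reg * torch.eye(d),
        features_train.T @ y_train)
    preds = features_eval @ v
    cos = torch.nn.functional.cosine_similarity(
        preds.flatten(), y_eval.flatten(), dim=0)
    return 1.0 - cos  # cosine distance, to be minimized by the meta-optimizer
```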

Practical Considerations

  • Meta gradients are approximated using truncated backprop through time.
  • Increasing variation in the training dataset helps the meta-optimization process. Data is augmented with shifts, rotations, and noise; predicting these coefficients is an auxiliary (regression) task for training the meta-objective.
  • Training the system requires a lot of resources - 8 days with 512 workers.

Results

  • With standard unsupervised learning, the performance (on the transfer task) starts declining after some time, even though the performance (on the unsupervised task) keeps improving. This suggests that the objective functions for the two tasks start to mismatch.
  • UnsupervisedUpdate leads to better generalization as compared to both a VAE and supervised learning (followed by transfer).
  • UnsupervisedUpdate also leads to positive transfer across domains (vision to language) when trained for a shorter duration of time (to ensure that the meta-objective does not overfit).
  • UnsupervisedUpdate also generalizes to larger model architectures and different activation functions.
diff --git a/_site/site/2019/04/09/Towards-a-natural-benchmark-for-continual-learning.html b/_site/site/2019/04/09/Towards-a-natural-benchmark-for-continual-learning.html new file mode 100644 index 00000000..a0a48e7a --- /dev/null +++ b/_site/site/2019/04/09/Towards-a-natural-benchmark-for-continual-learning.html @@ -0,0 +1,51 @@ +

Introduction

  • The Continual Learning paradigm focuses on learning from a non-stationary stream of data, with additional desiderata - transferring knowledge from previously seen tasks to unseen tasks and being resilient to catastrophic forgetting - all with a fixed memory and computational budget.
  • This is in contrast to the IID (independent and identically distributed) assumption in statistical learning.
  • One common example of non-iid data is setups involving sequential decision making - eg reinforcement learning.
  • Paper

Benchmark

  • Many existing benchmarks use MNIST as the underlying dataset (eg Permuted MNIST, Split MNIST, etc). These benchmarks lack complexity and make it hard to observe positive and negative backward transfer.
  • Most works focus only on the catastrophic forgetting challenge and ignore the other issues (like computation and memory footprint, the capacity of the network, etc).
  • The paper proposes a new benchmark, based on the Starcraft II video game, to understand the different approaches for lifelong learning.
  • The sequence of tasks is designed to be a curriculum - the learning agent starts with learning simple skills and later moves to more complex tasks. These complex tasks require remembering and composing skills learned in the earlier levels.
  • To evaluate for catastrophic forgetting, the tasks are designed such that not all the skills are needed for solving each task. Hence the learning agent needs to remember skills even though they are not needed at the current level.
  • Each level comes with a fixed computational budget of episodes, and each episode has a fixed time limit. Once the budget is consumed, the agent has to proceed to the next level. Hence agents with better sample efficiency would benefit.
  • The benchmark supports both RL and supervised learning versions. In the supervised version, expert agents (pretrained on each level) are also provided.
  • Baselines are provided for distillation (using experts): sequential training (fine-tuning), Dropout and SER. None of the baseline methods achieve positive or negative backward transfer.
  • When modeled as a pure RL task, the benchmark is extremely difficult to solve.
  • The paper suggests using a metric that records the amount of learning/data required to recover performance on the previous task.
diff --git a/_site/site/2019/05/14/Multiple-Model-Based-Reinforcement-Learning.html b/_site/site/2019/05/14/Multiple-Model-Based-Reinforcement-Learning.html new file mode 100644 index 00000000..9ce04fb3 --- /dev/null +++ b/_site/site/2019/05/14/Multiple-Model-Based-Reinforcement-Learning.html @@ -0,0 +1,56 @@ +
  • The paper presents some general ideas and mechanisms for multiple model-based RL. Even though the task and model architecture may not be very relevant now, I find the general idea and the mechanisms to be quite useful. As such, I am focusing only on the high-level ideas and not the implementation details themselves.
  • The main idea behind Multiple Model-based RL (MMRL) is to decompose complex tasks into multiple domains in space and time, so that the environment dynamics within each domain is predictable.
  • Link to the paper
  • MMRL proposes an RL architecture composed of multiple modules, each with its own state prediction model and RL controller.
  • The prediction error from each of the state prediction models defines the “responsibility signal” for each module.
  • This responsibility signal is used to (see the sketch after this list):
    • Weigh the state prediction output, ie the predicted state is the weighted sum of the individual state predictions (weighted by the responsibility signal).
    • Weigh the parameter updates of the environment models as well as the RL controllers.
    • Weigh the action output, ie the predicted action is a weighted sum of the individual actions.
  • The framework is amenable to incorporating prior knowledge about which module should be selected.
  • In the modular decomposition of a task, the modules should not change too frequently, and some kind of spatial and temporal continuity is also desired.
  • Temporal continuity can be accounted for by using the previous responsibility signal as input during the current timestep.
  • Spatial continuity can be ensured by considering a spatial prior like a Gaussian spatial prior.
  • Though model-free methods could be used for learning the RL controllers, model-based methods could be more relevant, given that the modules are learning state-prediction models as well.
  • Exploration can be ensured by using a stochastic version of greedy action selection.
  • One failure mode for such modular architectures is when a single module tries to perform well across all the tasks. The modules themselves should be relatively simplistic (eg linear models) which can learn quickly and generalize well.
  • A non-stationary hunting task in a grid world and a non-linear, non-stationary control task of swinging up a pendulum provide the proof of concept for the proposed methods.
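
A minimal sketch of the responsibility-weighted composition described above, under the assumption of Gaussian prediction-error models and with hypothetical helper names; the same weights would also scale each module's parameter updates:

```python
import numpy as np

def responsibility(pred_errors, sigma=1.0, prev_resp=None, temporal_weight=0.5):
    """Softmax of negative (scaled) prediction errors; optionally smoothed with
    the previous responsibility signal for temporal continuity."""
    logits = -np.asarray(pred_errors) / (2 * sigma ** 2)
    if prev_resp is not None:
        logits = logits + temporal_weight * np.log(prev_resp + 1e-8)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def compose(outputs, resp):
    """Weighted composition of module outputs (predicted states or actions alike)."""
    return sum(r * o for r, o in zip(resp, outputs))
```
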
diff --git a/_site/site/2019/05/21/Good-Enough-Compositional-Data-Augmentation.html b/_site/site/2019/05/21/Good-Enough-Compositional-Data-Augmentation.html new file mode 100644 index 00000000..fec0d400 --- /dev/null +++ b/_site/site/2019/05/21/Good-Enough-Compositional-Data-Augmentation.html @@ -0,0 +1,47 @@ +

Introduction

  • The paper introduces a simple data augmentation protocol that provides a good compositional inductive bias for sequential models.
  • Synthetic examples are created by taking real sequences and swapping fragments that appear in similar environments. This operation is referred to as GECA (Good Enough Compositional Augmentation).
  • The underlying idea is that if two fragments of training examples occur in some common environment, then any environment where the first fragment appears is also a valid environment for the second fragment.
  • Link to the paper

Approach

  • Discover substitutable fragments (ie pairs of fragments that occur in a common environment) and use them to generate new sequences by swapping the fragments (see the sketch after this list).
  • The current work uses a very simple criterion to decide if fragments are substitutable - the fragments should occur in at least one lexical environment that is exactly the same. A lexical environment is the k-word window around each span of the fragment.
  • Though the idea can be motivated by work in generative syntax and distributional semantics, it does not hold like a physical law when applied to real data.
  • The authors view this tradeoff as a balance between the shortage of training data and the relative frequency of mistakes introduced by the proposed data augmentation approach.
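
A minimal sketch of the substitution step, assuming single-token fragments and a k-word window environment (a simplification of the general multi-span case; all names are hypothetical):

```python
from collections import defaultdict
from itertools import combinations

def geca_augment(sequences, k=1):
    """Fragments sharing at least one identical lexical environment become
    substitutable anywhere either of them occurs."""
    env_to_frags = defaultdict(set)
    for seq in sequences:
        for i, frag in enumerate(seq):
            env = (tuple(seq[max(0, i - k):i]), tuple(seq[i + 1:i + 1 + k]))
            env_to_frags[env].add(frag)
    # Substitutability relation induced by shared environments.
    subs = defaultdict(set)
    for frags in env_to_frags.values():
        for f1, f2 in combinations(sorted(frags), 2):
            subs[f1].add(f2); subs[f2].add(f1)
    # Apply the substitutions everywhere to synthesize new sequences.
    new = set()
    for seq in sequences:
        for i, frag in enumerate(seq):
            for other in subs.get(frag, ()):
                new.add(tuple(seq[:i]) + (other,) + tuple(seq[i + 1:]))
    return [list(s) for s in new if list(s) not in sequences]

# "she sang loudly" / "she danced loudly" make sang~danced substitutable,
# so "he sang softly" also yields the new sequence "he danced softly".
print(geca_augment([["she", "sang", "loudly"],
                    ["she", "danced", "loudly"],
                    ["he", "sang", "softly"]]))
```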

Results

  • The approach is evaluated on the SCAN dataset, where the model is trained on short sequences of English commands. Though the dataset augmentation helps the baseline models, this is not surprising, given the nature of the SCAN dataset.
  • More challenging tasks (for evaluating the proposed approach) are semantic parsing (where the query is represented in the form of λ-calculus or SQL) and low-resource language modeling. While the improvement (in terms of metrics) is sometimes limited, the gains are consistent across different datasets.
  • Given that the proposed approach is relatively simple and straightforward, it appears to be quite promising.
diff --git a/_site/site/2019/06/01/Relational-Reinforcement-Learning.html b/_site/site/2019/06/01/Relational-Reinforcement-Learning.html new file mode 100644 index 00000000..724c013d --- /dev/null +++ b/_site/site/2019/06/01/Relational-Reinforcement-Learning.html @@ -0,0 +1,121 @@ +

Introduction

  • The Relational Reinforcement Learning (RRL) paradigm uses relational state (and action) spaces and policy representations, to leverage the generalization capability of relational learning for reinforcement learning.
  • The paper shows the effectiveness of RRL - in terms of generalization, sample efficiency and interpretability - using the Box-World and StarCraft II minigames.
  • Link to the paper.

Architecture

  • The main idea is to use neural network models that operate on structured representations and perform relational reasoning via iterated, message-passing style methods.
  • The use of non-local computations with a shared function (in terms of pairwise interactions between entities) provides a better inductive bias.
  • A multi-head dot-product attention mechanism is used to model the pairwise interactions (with one or more attention blocks).
  • Iterative computations can be used to capture higher-order interactions between entities.
  • Entity extraction is based on the assumption that entities are things located at a particular point in space.
  • A CNN is used to parse the pixel-space observation into k feature maps of size n x n. The (x, y) coordinates are concatenated to each k-dimensional pixel feature-vector to indicate the pixel’s position in the map (see the sketch after this list).
  • The resulting n^2 x k matrix acts as the entity matrix.
  • An actor-critic architecture (using the distributed agent IMPALA) is used.
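
A minimal sketch of the entity-extraction step, assuming PyTorch and hypothetical shapes; a CNN's feature maps are flattened into an entity matrix with normalized (x, y) coordinates appended:

```python
import torch

def to_entities(feature_maps):
    """feature_maps: (batch, k, n, n) CNN output -> (batch, n*n, k + 2) entities."""
    b, k, n, _ = feature_maps.shape
    entities = feature_maps.flatten(2).transpose(1, 2)        # (batch, n*n, k)
    ys, xs = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    # Normalized (x, y) coordinate per pixel, appended to its feature vector.
    coords = torch.stack([xs, ys], dim=-1).reshape(1, n * n, 2).float() / n
    return torch.cat([entities, coords.expand(b, -1, -1)], dim=-1)

# Pairwise interactions can then be modeled with multi-head attention over
# the n*n entities, eg torch.nn.MultiheadAttention(embed_dim=k + 2, num_heads=2)
# (embed_dim must be divisible by num_heads).
```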

Environment


Box-World

  • A 12 x 12-pixel room with keys and boxes placed randomly.
  • The agent can move in 4 directions.
  • The task is to collect gems by unlocking boxes (which may contain keys to unlock other boxes).
  • Each level has a unique sequence in which boxes need to be opened, as opening the wrong box could make the level unsolvable.
  • The difficulty of a level can be controlled using: (i) the number of boxes in the path to the goal, (ii) the number of distractor branches, (iii) the length of the distractor branches.

StarCraft II minigames

  • 9 mini-games designed as specific scenarios in the StarCraft game are used.

Results


Box-World

  • RRL agents solve over 98% of the levels while the RL agent solves less than 95% of the levels.
  • Visualizing the attention scores indicates that:
    • keys attend to locks they can unlock.
    • all objects attend to the agent’s location.
    • the agent and gem attend to each other (and themselves).
  • Generalization capacity is tested in two ways:
    • Performance on levels that require opening a longer sequence of boxes than seen during training.
    • Performance on levels that require key-lock combinations not seen during training.
  • In both scenarios, the RRL agent significantly outperforms the RL agent.

StarCraft

  • The RRL agent achieves better or equal results than the RL agent in all but one game.
  • For testing generalization, the agent that was trained for controlling two marines was transferred to a task which requires it to control 5 marines. These results are not conclusive, given the high variability.
diff --git a/_site/site/2019/06/08/Meta-Reinforcement-Learning-of-Structured-Exploration-Strategies.html b/_site/site/2019/06/08/Meta-Reinforcement-Learning-of-Structured-Exploration-Strategies.html new file mode 100644 index 00000000..759c1ddf --- /dev/null +++ b/_site/site/2019/06/08/Meta-Reinforcement-Learning-of-Structured-Exploration-Strategies.html @@ -0,0 +1,92 @@ +

Introduction

  • The paper looks at the problem of learning structured exploration policies for training RL agents.
  • Link to the paper

Structured Exploration

  • Consider a stochastic, parameterized policy πθ(a|s), where θ represents the policy parameters.
  • To encourage exploration, noise can be added to the policy at each time step t. But noise added in such a manner does not have any notion of temporal coherence.
  • Another issue is that if the policy is represented by a simple distribution (say, a parameterized unimodal Gaussian), it cannot model complex time-correlated stochastic processes.
  • The paper proposes to condition the policy on per-episode random variables (z) which are sampled from a learned latent distribution.
  • Consider a distribution over the tasks p(T). At the start of any episode of the ith task, a latent variable zi is sampled from the distribution N(μi, σi), where μi and σi are learned parameters of the distribution, referred to as the variational parameters.
  • Once sampled, the same zi is used to condition the policy for as long as the current episode lasts, and the action is sampled from the distribution πθ(a|s, zi) (see the sketch after this list).
  • The intuition is that the latent variable zi encodes the notion of a task or goal that does not change arbitrarily during the episode.
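
A minimal sketch of the per-episode latent conditioning, with hypothetical names; z is sampled once via the reparameterization trick and held fixed for the whole episode:

```python
import torch

def run_episode(env, policy_net, mu_i, log_sigma_i, max_steps=200):
    """Sample z once per episode; condition every action on the same z."""
    z = mu_i + torch.exp(log_sigma_i) * torch.randn_like(mu_i)  # reparameterized
    s = env.reset()
    for _ in range(max_steps):
        obs = torch.cat([torch.as_tensor(s, dtype=torch.float32), z])
        mean, std = policy_net(obs)                 # Gaussian policy head
        a = torch.normal(mean, std)
        s, r, done, _ = env.step(a.detach().numpy())
        if done:
            break
```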

Model Agnostic Exploration with Structured Noise

  • The paper focuses on the setting where the structured exploration policies are to be learned while leveraging the learning from prior tasks.
  • A meta-learning approach, called model agnostic exploration with structured noise (MAESN), is proposed to learn a good initialization of the policy parameters and to learn a latent space (for sampling the z from) that can inject structured stochasticity into the policy.
  • General meta-RL approaches have two limitations when it comes to “learning to explore”:
    • Casting meta-RL problems as RL problems leads to policies that do not exhibit sufficient variability to explore effectively.
    • Many current approaches try to meta-learn the entire learning algorithm, which limits the asymptotic performance of the model.
  • The idea behind MAESN is to meta-train the policy parameters so that they learn to use the task-specific latent variables for exploration and can quickly adapt to a new task.
  • An important detail is that the parameters are optimized to maximize the expected reward after one step of gradient update, to ensure that the policy uses the latent variables for exploration.
  • For every iteration of meta-training, an “inner” gradient update is performed on the variational parameters and the post-inner-update parameters are used to perform the meta-update.
  • The authors report that performing the “inner” gradient update on the policy parameters does not help the overall learning objective, and that the step size for each parameter had to be meta-learned.
  • The variational parameters have the usual KL divergence loss, which encourages them to be close to the prior distribution (a unit Gaussian in this case).
  • After training, the variational parameters for each task are quite close to the prior, probably because the training objective optimizes for the expected reward after one step of gradient descent on the variational parameters.
  • Another implementation detail is that reward shaping is used to ensure that the policy gets a useful signal during meta-training. To be fair to the baselines, reward shaping is used while training the baselines as well. Moreover, the policies trained with reward shaping generalize to the sparse reward setup as well (during meta-test time).

Experiments

  • Three task distributions are considered: Robotic Manipulation, Wheeled Locomotion, and Legged Locomotion. Each task distribution has 100 meta-training tasks.
  • In the Manipulation task distribution, the learner has to push different blocks from different positions to different goal positions. In the Locomotion task distributions, the different tasks correspond to different goal positions.
  • The experiments show that the proposed approach can adapt to new tasks quickly and learn a coherent exploration strategy.
  • In some cases, learning from scratch also provides a strong asymptotic performance, although learning from scratch takes much longer.

diff --git a/_site/site/2019/06/13/Extrapolating-Beyond-Suboptimal-Demonstrations-via-Inverse-Reinforcement-Learning-from-Observations.html b/_site/site/2019/06/13/Extrapolating-Beyond-Suboptimal-Demonstrations-via-Inverse-Reinforcement-Learning-from-Observations.html new file mode 100644 index 00000000..a2a21f00 --- /dev/null +++ b/_site/site/2019/06/13/Extrapolating-Beyond-Suboptimal-Demonstrations-via-Inverse-Reinforcement-Learning-from-Observations.html @@ -0,0 +1,72 @@ +

Introduction

  • The paper proposes a new inverse RL (IRL) algorithm, called Trajectory-ranked Reward EXtrapolation (T-REX), that learns a reward function from a collection of ranked trajectories.
  • Standard IRL approaches aim to learn a reward function that “justifies” the demonstration policy, and hence those approaches cannot outperform the demonstration policy.
  • In contrast, T-REX aims to learn a reward function that “explains” the ranking over demonstrations, and can learn a policy that outperforms the demonstration policy.
  • Link to the paper

Approach

  • The input is a sequence of trajectories T1, …, Tm which are ranked in order of preference. That is, given any pair of trajectories, we know which of the two is better.
  • The setup is learning from observations, where the learning agent has access to neither the true reward function nor the actions taken by the demonstration policy.
  • Reward Inference (see the sketch after this list)
    • A parameterized reward function rθ is trained with the ranking information, using a binary classification loss which aims to predict which of two given trajectories would be ranked higher.
    • Given a trajectory, the reward function predicts the reward for each state. The sum of rewards (corresponding to the two trajectories) is used to predict the preferred trajectory.
    • T-REX uses partial trajectories instead of full trajectories as a data augmentation strategy.
  • Policy Optimization
    • Once a reward function has been learned, standard RL approaches can be used to train a new policy.
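
A minimal sketch of the reward-inference loss, assuming PyTorch and a state-level reward network (hypothetical names); the pairwise ranking is treated as binary classification over summed rewards:

```python
import torch

def trex_loss(reward_net, traj_i, traj_j):
    """traj_i, traj_j: (T, state_dim) tensors, with traj_j ranked better than traj_i."""
    r_i = reward_net(traj_i).sum()   # predicted return of the worse trajectory
    r_j = reward_net(traj_j).sum()   # predicted return of the better trajectory
    # Cross-entropy of a softmax over the two returns: prefer traj_j (class 1).
    logits = torch.stack([r_i, r_j])
    return torch.nn.functional.cross_entropy(logits.unsqueeze(0),
                                             torch.tensor([1]))
```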

Results

  • Environments: Mujoco (Half Cheetah, Ant, Hopper), Atari.
  • Demonstrations are generated using PPO (checkpointed at different stages of training).
  • An ensemble of networks is used to learn the reward functions.
  • The proposed approach outperforms the baselines - Behaviour Cloning from Observations and Generative Adversarial Imitation Learning.
  • In terms of reward extrapolation, T-REX can predict the reward for trajectories which are better than the demonstration trajectories.
  • Ablation studies consider the effect of adding noise (randomly swapping the preference between trajectories) and find that the model is somewhat robust to noise, up to an extent.
diff --git a/_site/site/2019/06/20/Hamiltonian-Neural-Networks.html b/_site/site/2019/06/20/Hamiltonian-Neural-Networks.html new file mode 100644 index 00000000..ee138e4d --- /dev/null +++ b/_site/site/2019/06/20/Hamiltonian-Neural-Networks.html @@ -0,0 +1,79 @@ +

Introduction

  • The paper proposes a very cool idea at the intersection of deep learning and physics.
  • The idea is to train a neural network architecture that builds on the concept of Hamiltonian Mechanics (from Physics) to learn physical conservation laws in an unsupervised manner.
  • Link to the paper
  • Link to the code
  • Link to author’s blog

Hamiltonian Mechanics

  • Hamiltonian mechanics is a branch of physics that can describe systems which follow some conservation laws and invariants.
  • Consider a set of N pairs of coordinates [(q1, p1), …, (qN, pN)], where q = [q1, …, qN] denotes the positions of the set of objects while p = [p1, …, pN] denotes their momentum.
  • Together these N pairs completely describe the system.
  • A scalar function H(q, p), called the Hamiltonian, is defined such that the partial derivative of H with respect to p is equal to the derivative of q with respect to time t, and the negative of the partial derivative of H with respect to q is equal to the derivative of p with respect to time t.
  • This can be expressed in the form of the following equations:

dq/dt = ∂H/∂p,  dp/dt = -∂H/∂q

  • The Hamiltonian can be tied to the total energy of the system and can be used in any system where the total energy is conserved.

Hamiltonian Neural Network (HNN)

  • The Hamiltonian H can be parameterized using a neural network, which can learn conserved quantities from data in an unsupervised manner.
  • The loss function looks as follows:

L_HNN = ||∂H/∂p - dq/dt||^2 + ||∂H/∂q + dp/dt||^2

  • The partial derivatives can be obtained by computing the in-graph gradient of the output variables with respect to the input variables (see the sketch below).
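
A minimal sketch of this loss with autograd, assuming `hnn` maps (q, p) to a scalar H (hypothetical names) and that finite-difference targets dq/dt, dp/dt come from the data:

```python
import torch

def hnn_loss(hnn, q, p, dq_dt, dp_dt):
    """Penalize deviation from Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq."""
    q = q.requires_grad_(True)
    p = p.requires_grad_(True)
    H = hnn(q, p).sum()
    # In-graph gradients of the scalar output w.r.t. the inputs.
    dH_dq, dH_dp = torch.autograd.grad(H, (q, p), create_graph=True)
    return ((dH_dp - dq_dt) ** 2).mean() + ((dH_dq + dp_dt) ** 2).mean()
```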

Observations

  • For setups where the energy must be conserved exactly (eg the ideal mass-spring and ideal pendulum), the HNN learns to preserve an energy-like scalar.
  • For setups where the energy need not be conserved exactly, the HNNs still learn to preserve the energy, thus highlighting a limitation of HNNs.
  • In the case of the two-body problem, the HNN model is shown to be much more robust when making predictions over longer time horizons, as compared to the baselines.
  • In the final experiment, the model is trained on pixel observations and not state observations. In this case, two auxiliary losses are added: an auto-encoder reconstruction loss and a loss on the latent space representations. Similar to the previous experiments, the HNN model makes robust predictions over much longer time horizons.
diff --git a/_site/site/2019/06/27/Measuring-Abstract-Reasoning-in-Neural-Networks.html b/_site/site/2019/06/27/Measuring-Abstract-Reasoning-in-Neural-Networks.html new file mode 100644 index 00000000..70ab4f5e --- /dev/null +++ b/_site/site/2019/06/27/Measuring-Abstract-Reasoning-in-Neural-Networks.html @@ -0,0 +1,165 @@ +

Introduction

  • The paper proposes a dataset to diagnose the abstract reasoning capabilities of learning systems.
  • The paper shows that a variant of relational networks, explicitly designed for abstract reasoning, outperforms models like ResNets.
  • Link to the paper

Idea

  • Visual reasoning tasks, inspired by the human IQ test, are used to evaluate the models in terms of generalization.
  • Let’s say that we want to test if the model understands the abstract notion of “increasing”. We could train the model on data that captures this notion in terms of, say, increasing sizes (or quantities) of objects, and then test it on a dataset where the notion is expressed in terms of increasing intensity of color.
  • The dataset is then used to evaluate whether the models can find any solution to such abstract reasoning tasks, and how well they generalize when the abstract content is specifically controlled.

Dataset


Raven’s Progressive Matrices (RPMs):

  • Consists of an incomplete 3x3 matrix of images where the missing image needs to be filled in, typically by choosing from a set of candidate images.
  • As such, it is possible to justify multiple answers as correct, though, in practice, the right answer is the one with the simplest explanation.

Procedurally Generated Matrices (PGMs)

  • RPM-like matrices are generated procedurally by building an abstract structure for the matrices.
  • The abstract structure S consists of 3 components: (i) relation types (R), (ii) object types (O) and (iii) attribute types (A), ie S = {(r, o, a) : r ∈ R, o ∈ O, a ∈ A}.
  • This can be read as: “Structure S is instantiated on attribute a of object o and exhibits the relation r”. For example, S is instantiated on the “color” of object “shape” and exhibits the relation “increasing”.
  • In general, the structure could be made of more than one such tuple, and the more tuples there are, the harder the task.
  • Given the structure, sample values v for each attribute a while conforming with the relation r. For example, if the attribute is “color” and the relation is “increasing”, the intensity of color must increase.
  • The resulting structure is rendered as pixels.

Test for Generalization

  • The paper tests for the following generalization scenarios:
  • Neutral: The structures of the training and test data can contain any tuple.
  • Interpolation: The training data contains the even-indexed members of the attribute values while the test data contains the odd-indexed members.
  • Extrapolation: The training data contains the first half of the attribute values while the test data contains the second half.
  • Heldout attribute: The training data contains no tuples with (o = shape, a = color) or (o = line, a = type).
  • Heldout triples: Out of 29 possible triples, 7 are held out from training and only used during testing.
  • Heldout pairs-of-triples: Out of 400 possible sets of pairs of triples, 40 are held out and used only during testing.
  • Heldout attribute pairs: Out of 20 (unordered) variable attribute pairs, 4 are held out and used only during testing.

Models

  • Input: 8 context panels (from the 3x3 matrix) where the last panel needs to be filled in.
  • CNN-MLP - a 4-layer CNN with batchnorm and ReLU.
  • ResNet - ResNet-50 (as it performed better than ResNet-101 and ResNet-152).
  • LSTM
  • Wild Relation Network (WReN) - A CNN model encodes the 8 panels and the candidate answers, and feeds them as input to a relation network.
  • Context-blind ResNet - a ResNet network without the context (ie without the 8 input panels).

Results

  • The WReN model outperforms the other models in the Neutral setup.
  • Models have a harder time differentiating between sizes than quantities.
  • WReN is the best performing model in all the setups, and the rest of the discussion only applies to that model.
  • Generalization is easiest in the case of interpolation and worst in the case of extrapolation, hinting at the limited generalization capability of the models.

Auxiliary Training

  • The model is also trained to predict the relevant relation, object and attribute types, using meta-targets that encode this information.
  • The auxiliary training helps in all the cases. Further, the model’s accuracy on the main task is higher in the cases where it solves the auxiliary tasks well.

Key Takeaway

  • For abstract visual reasoning tasks, the choice of model can make a large difference - the case in consideration being ResNets vs Relation Networks.
  • Using an auxiliary loss that encourages the model to “explain” its reasoning (in this case by predicting the attributes, relations, etc) helps to improve the performance on the main task as well.
  • Given that the challenge is motivated by tasks used to measure human IQ, it would have been interesting to get an estimate of human performance on at least a subset of this dataset.
diff --git a/_site/site/2019/07/18/Set-Transformer-A-Framework-for-Attention-based-Permutation-Invariant-Neural-Networks.html b/_site/site/2019/07/18/Set-Transformer-A-Framework-for-Attention-based-Permutation-Invariant-Neural-Networks.html new file mode 100644 index 00000000..02555698 --- /dev/null +++ b/_site/site/2019/07/18/Set-Transformer-A-Framework-for-Attention-based-Permutation-Invariant-Neural-Networks.html @@ -0,0 +1,103 @@ +

Introduction

  • Consider problems where the input to the model is a set. In such problems (referred to as set-input problems), the model should be invariant to the permutation of the data points.
  • In “set pooling” methods (1, 2), each data point (in the input set) is encoded using a feed-forward network and the resulting set of encoded representations is pooled using the “sum” operator.
  • This approach can be shown to be both permutation-invariant and a universal function approximator.
  • The paper proposes an attention-based network module, called the Set Transformer, which can model the interactions between the elements of an input set while being permutation invariant.
  • Link to the paper

Transformer

  • An attention function Attn(Q, K, V) = ω(QKT)V, where ω is an activation function (typically a row-wise softmax), is used to map queries Q to outputs using key-value pairs K, V.
  • In the case of multi-head attention, the keys, queries, and values are projected into h different vectors and attention is applied to all these vectors. The output is a linear transformation of the concatenation of all the vectors.

Set Transformer

  • 3 modules are introduced: MAB, SAB and ISAB (see the sketch after this list).
  • Multihead Attention Block (MAB) is a module very similar to the encoder in the Transformer, without the positional encoding and dropout.
  • Set Attention Block (SAB) is a module that takes as input a set and performs self-attention between the elements of the set to produce another set of the same size ie SAB(X) = MAB(X, X).
  • The time complexity of the SAB operation is O(n^2) where n is the number of elements in the set. It can be reduced to O(m*n) by using Induced Set Attention Blocks (ISAB) with m induced point vectors (denoted as I).
  • ISAB_m(X) = MAB(X, MAB(I, X)).
  • ISAB can be seen as performing a low-rank projection of inputs.
  • These modules can be used to model the interactions between data points in any given set.
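
A minimal PyTorch-style sketch of these modules, assuming standard multi-head attention; the module names follow the paper, but the layer-norm/feed-forward arrangement here is an illustrative simplification:

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    # Multihead Attention Block: like a Transformer encoder block,
    # but without positional encoding and dropout.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, X, Y):
        # X attends to Y (queries come from X, keys/values from Y).
        H = self.ln1(X + self.attn(X, Y, Y)[0])
        return self.ln2(H + self.ff(H))

class SAB(nn.Module):
    # Self-attention between set elements: SAB(X) = MAB(X, X). Cost is O(n^2).
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.mab = MAB(dim, num_heads)

    def forward(self, X):
        return self.mab(X, X)

class ISAB(nn.Module):
    # Induced Set Attention Block with m induced points: cost is O(m * n).
    # ISAB_m(X) = MAB(X, MAB(I, X)).
    def __init__(self, dim, m, num_heads=4):
        super().__init__()
        self.I = nn.Parameter(torch.randn(1, m, dim))  # induced point vectors
        self.mab1, self.mab2 = MAB(dim, num_heads), MAB(dim, num_heads)

    def forward(self, X):
        H = self.mab1(self.I.expand(X.size(0), -1, -1), X)  # (batch, m, dim)
        return self.mab2(X, H)                              # (batch, n, dim)
```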

Pooling by Multihead Attention (PMA)

  • Aggregation is performed by applying multi-head attention on a set of k seed vectors (a minimal sketch follows this list).
  • The interaction between the k outputs (from PMA) can be modeled by applying another SAB.
  • Thus the entire network is a stack of SABs and ISABs. Both the modules are permutation invariant and so is any network obtained by stacking them.
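
A sketch of PMA under the same assumptions, reusing the MAB module from the sketch above: k learnable seed vectors attend over the encoded set, yielding k pooled outputs (k = 1 for tasks that need a single output vector).

```python
class PMA(nn.Module):
    # Pooling by Multihead Attention: k seed vectors attend over the set.
    def __init__(self, dim, k=1, num_heads=4):
        super().__init__()
        self.S = nn.Parameter(torch.randn(1, k, dim))  # seed vectors
        self.mab = MAB(dim, num_heads)

    def forward(self, X):
        # Output has shape (batch, k, dim), independent of the set size n.
        return self.mab(self.S.expand(X.size(0), -1, -1), X)
```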

Experiments

  • Datasets include:
    • Predicting the maximum value from a set.
    • Counting unique (Omniglot) characters from an image.
    • Clustering with a mixture of Gaussians (synthetic points and CIFAR-100).
    • Set anomaly detection (CelebA).
  • Generally, increasing m (the number of inducing datapoints) improves performance, up to some extent. This is somewhat expected.
  • The paper considers various ablations of the proposed approach (like disabling attention in the encoder or pooling layer) and shows that the attention mechanism is needed during both stages.
  • The work has two main benefits over prior work:
    • Reducing the O(n^2) complexity to O(m*n) complexity.
    • Using the self-attention mechanism both for encoding the inputs and for aggregating the encoded representations.
diff --git a/_site/site/2019/07/25/Quantifying-Generalization-in-Reinforcement-Learning.html b/_site/site/2019/07/25/Quantifying-Generalization-in-Reinforcement-Learning.html
new file mode 100644
index 00000000..5db404b6
--- /dev/null
+++ b/_site/site/2019/07/25/Quantifying-Generalization-in-Reinforcement-Learning.html
@@ -0,0 +1,128 @@

Introduction

  • The paper introduces a new, procedurally generated environment called CoinRun that is designed to benchmark the generalization capabilities of RL algorithms.
  • The paper reports that deep convolutional architectures and techniques like L2 regularization, batch norm, etc (which were proposed in the context of generalization in supervised learning) are also useful for RL.
  • Link to the paper

CoinRun Environment

  • CoinRun is made of multiple levels.
  • In each level, the agent spawns on the far left side and needs to collect a single coin that lies on the far right side.
  • There are many obstacles in between, and colliding with an obstacle leads to the agent’s death.
  • Each episode extends for a maximum of 1000 steps.
  • CoinRun is designed such that, given sufficient training time and levels, a near-optimal policy can be learned for all the levels.

Generalization

  • Generalization can be measured by training an agent on a given set of training tasks and evaluating on an unseen set of test tasks.
  • 9 agents are trained to play CoinRun, on different training sets (each with a different number of levels).
  • The first 8 agents are trained on sets of size 100 to 16000 levels while the last agent is trained on an unbounded set of levels.
  • Training a model on an unbounded set of levels provides a good proxy for the train-to-test generalization performance.

Evaluating Architectures

  • Two convolutional architectures (of different sizes) are compared:
    • Nature-CNN: The CNN architecture used in the Deep Q Network. This is the smaller network among the two models.
    • IMPALA-CNN: The CNN architecture used in the IMPALA architecture.
  • The IMPALA-CNN agent always outperforms the Nature-CNN agent, indicating that the larger architecture has more capacity for generalization. But increasing the network size beyond a limit gives diminishing returns.

Evaluating Regularization

  • While both L2 regularization and Dropout help to improve generalization, L2 regularization is more impactful.
  • A domain randomization/data augmentation approach is tested where rectangular regions of different sizes are masked and assigned a random color (a minimal sketch follows this list). This approach seems to improve performance.
  • Batch Normalization helps to improve performance as well.
  • Environment stochasticity is introduced by using sticky actions while policy stochasticity is introduced by controlling the entropy bonus. Both these forms of stochasticity boost performance.
  • While combining different regularization methods helps, the gains are only marginally better than using just 1 regularization approach. This suggests that these different approaches induce similar generalization properties.
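
A minimal numpy sketch of this masking augmentation; the exact size and color distributions used in the paper may differ, so treat this as an illustrative version:

```python
import numpy as np

def mask_random_rectangles(obs, num_rects=3, rng=np.random):
    """Assign random colors to random rectangular regions of an observation.

    obs: uint8 image of shape (H, W, 3). Returns an augmented copy.
    """
    out = obs.copy()
    h, w, _ = out.shape
    for _ in range(rng.randint(1, num_rects + 1)):
        rh, rw = rng.randint(1, h // 2), rng.randint(1, w // 2)
        y, x = rng.randint(0, h - rh), rng.randint(0, w - rw)
        out[y:y + rh, x:x + rw] = rng.randint(0, 256, size=3)  # random color
    return out
```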

Additional Environments

  • Two additional environments are also considered to verify the high degree of overfitting observed in the CoinRun environment:
    • CoinRun-Platforms:
      • Unlike CoinRun, each episode can have multiple coins and the time limit is increased to 1000 steps.
      • Levels are larger as well, so the agent might need to backtrack its steps.
    • RandomMazes:
      • Partially observed environment with square mazes of dimensions 3x3 to 25x25.
      • Time limit of 500 steps.
  • Overfitting is observed for both these environments as well.
diff --git a/_site/site/2019/08/01/Assessing-Generalization-in-Deep-Reinforcement-Learning.html b/_site/site/2019/08/01/Assessing-Generalization-in-Deep-Reinforcement-Learning.html
new file mode 100644
index 00000000..23fa865e
--- /dev/null
+++ b/_site/site/2019/08/01/Assessing-Generalization-in-Deep-Reinforcement-Learning.html
@@ -0,0 +1,141 @@

  • The paper presents a benchmark and experimental protocol (environments, metrics, baselines, training/testing setup) to evaluate RL algorithms for generalization.
  • Several RL algorithms are evaluated and the key takeaway is that the “vanilla” RL algorithms can generalize better than the RL algorithms that are specifically designed to generalize, given enough diversity in the distribution of the training environments.
  • Link to the paper
  • The focus is on evaluating generalization to environmental changes that affect the system dynamics (and not the goal or rewards).
  • Two generalization regimes are considered:
    • Interpolation - parameters of the test environment are similar to the parameters of the training environment.
    • Extrapolation - parameters of the test environment are different from the parameters of the training environment.
  • The following algorithms are considered as part of the benchmark:
    • “Vanilla” RL algorithms - A2C, PPO
    • RL algorithms that are designed to generalize:
      • EPOpt - Learn a (robust) policy that maximizes the expected reward over the most difficult distribution of environments (ones with the worst expected reward).
      • RL2 - Learn an (adaptive) policy that can adapt to the current environment/task by considering the trajectory and not just the state transition sequence.
    • These specially designed RL algorithms can be optimized using either A2C or PPO, leading to combinations like EPOpt-A2C, EPOpt-PPO, etc.
    • The models are composed either entirely of feedforward networks or of feedforward + recurrent networks.
  • Environments
    • CartPole, MountainCar, Acrobot, and Pendulum from OpenAI Gym.
    • HalfCheetah and Hopper from OpenAI Roboschool.
    • Three versions of each environment are considered:
      • Deterministic: Environment parameters are fixed. This case corresponds to the standard environment setup in classical RL.
      • Random: Environment parameters are sampled randomly. This case corresponds to sampling from a distribution of environments.
      • Extreme: Environment parameters are sampled from their extreme values. This case corresponds to the edge-case environments which would generally not be encountered during training.
  • Performance Metrics
    • Average total reward per episode.
    • Success percentage: Percentage of episodes where a certain goal (or reward) is obtained.
  • Evaluation Metrics/Setups (a sketch of the extrapolation score follows this list)
    • Default: success percentage when training and evaluating on the deterministic version of the environment.
    • Interpolation: success percentage when training and evaluating on the random version of the environment.
    • Extrapolation: the geometric mean of the success percentages of the following three setups:
      • Train on the deterministic version and evaluate on the random version.
      • Train on the deterministic version and evaluate on the extreme version.
      • Train on the random version and evaluate on the extreme version.
  • Observations
    • Extrapolation is harder than interpolation.
    • Increasing the diversity in the training environments improves the interpolation generalization of vanilla RL methods.
    • EPOpt improves generalization only for continuous control environments and only with PPO.
    • RL2 is difficult to train on the environments considered and did not provide a clear advantage in terms of generalization.
    • EPOpt-PPO outperforms PPO on only 3 environments, and EPOpt-A2C does not outperform A2C.
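
A small sketch of how the extrapolation score can be computed from the three train/evaluate success percentages; the function and variable names are illustrative:

```python
import numpy as np

def extrapolation_score(dr_success, de_success, re_success):
    """Geometric mean of success percentages for the three extrapolation setups:
    deterministic->random (dr), deterministic->extreme (de), random->extreme (re).
    """
    return float(np.cbrt(dr_success * de_success * re_success))

# Example: 80%, 20% and 45% success on the three setups.
print(extrapolation_score(80.0, 20.0, 45.0))  # ~41.6
```
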
diff --git a/_site/site/2019/08/08/Deep-Reinforcement-Learning-in-a-Handful-of-Trials-using-Probabilistic-Dynamics-Models.html b/_site/site/2019/08/08/Deep-Reinforcement-Learning-in-a-Handful-of-Trials-using-Probabilistic-Dynamics-Models.html
new file mode 100644
index 00000000..ecfcd8ec
--- /dev/null
+++ b/_site/site/2019/08/08/Deep-Reinforcement-Learning-in-a-Handful-of-Trials-using-Probabilistic-Dynamics-Models.html
@@ -0,0 +1,81 @@

Introduction

  • The paper proposes a new algorithm called Probabilistic Ensembles with Trajectory Sampling (PETS) that combines uncertainty-aware deep learning models (an ensemble of deep learning models that encode uncertainty) with sampling-based uncertainty propagation.
  • PETS improves over other probabilistic MBRL approaches by isolating epistemic uncertainty (due to limited training data) and aleatoric uncertainty (inherent in the system).
  • Link to the paper

Uncertainty-Aware Neural Network Dynamics Model

  • Aleatoric uncertainty can be accounted for by learning a parameterized distribution (probabilistic neural network) trained with the negative log-likelihood (a minimal sketch follows this list).
  • Epistemic uncertainty can be accounted for by either having an infinite amount of data or by using ensembles.
  • The paper uses a neural network to predict the mean and standard deviation of a Gaussian distribution which defines the predictive model. This setup is referred to as the “probabilistic” model and denoted by P.
  • The alternate setup of the deterministic model is where a neural network is used to make a point prediction (and is denoted by D).
  • An ensemble of probabilistic models is denoted as PE while that of deterministic models is denoted as DE.
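
A minimal PyTorch-style sketch of one probabilistic dynamics model trained with the Gaussian negative log-likelihood; the architecture sizes are illustrative:

```python
import torch
import torch.nn as nn

class ProbabilisticDynamics(nn.Module):
    # Predicts mean and log-variance of a Gaussian over the next state.
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * state_dim))
        self.state_dim = state_dim

    def forward(self, state, action):
        out = self.net(torch.cat([state, action], dim=-1))
        mean, log_var = out.split(self.state_dim, dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of target under N(mean, exp(log_var)).
    return 0.5 * (((target - mean) ** 2) / log_var.exp() + log_var).sum(-1).mean()
```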

Planning and Control with learned Dynamics

  • Model Predictive Control (MPC) is used for planning.
  • Given a start state and an action sequence, the probabilistic dynamics model induces a distribution over the trajectories.
  • The first action, among the sequence of optimized actions, is executed.
  • Instead of random shooting, the Cross-Entropy Method (CEM) is used.

Trajectory Sampling

  • Let us say there are B bootstrap models in the ensemble. Given the current state, P particles are created and each particle is propagated using one of the bootstrap models (a minimal sketch follows this list). Two variants are considered:
    • TS1 - At each timestep, each particle re-samples a bootstrap. In this case, particle separation cannot be attributed to the compounding effects of the bootstraps.
    • TS$\infty$ - The bootstrap model (per particle) is sampled just once and is not changed after that. This setup separates aleatoric and epistemic uncertainty. Aleatoric state variance is the average variance of particles of the same bootstrap, while epistemic state variance is the variance of the average of particles of the same bootstrap indexes.
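
A numpy sketch of TS$\infty$-style particle propagation; the `ensemble` list of one-step sampling functions is an assumed interface:

```python
import numpy as np

def propagate_ts_inf(ensemble, state, actions, num_particles, rng=np.random):
    """TS-infinity: each particle commits to one bootstrap model for the
    whole rollout. `ensemble` is a list of functions f(state, action) that
    sample a next state; `actions` is the candidate action sequence.
    """
    particles = np.repeat(state[None, :], num_particles, axis=0)
    assignment = rng.randint(len(ensemble), size=num_particles)  # fixed per particle
    trajectory = [particles.copy()]
    for action in actions:
        particles = np.stack([ensemble[b](p, action)
                              for p, b in zip(particles, assignment)])
        trajectory.append(particles.copy())
    return np.stack(trajectory)  # (horizon + 1, num_particles, state_dim)
```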

Result

  • The proposed approach reaches the asymptotic performance of state-of-the-art model-free algorithms with far fewer samples.
  • The general performance trend is: probabilistic ensemble > probabilistic model > deterministic ensemble > deterministic model.
  • Initial experiments for learning a policy by propagating gradients through the ensemble of models did not work and have been left as future work.
diff --git a/_site/site/2019/08/15/Abductive-Commonsense-Reasoning.html b/_site/site/2019/08/15/Abductive-Commonsense-Reasoning.html
new file mode 100644
index 00000000..543daada
--- /dev/null
+++ b/_site/site/2019/08/15/Abductive-Commonsense-Reasoning.html
@@ -0,0 +1,80 @@

Introduction

  • The paper presents the task of abductive NLI (pronounced as alpha NLI) where the model needs to perform abductive reasoning.
  • Abductive reasoning is the inference to the most plausible explanation. Even though it is considered to be an important component for understanding narratives, the work in this domain is sparse.
  • A new dataset called Abductive Reasoning in narrative Text (ART), consisting of 20K narrative contexts and 200K explanations, is also provided. The dataset models the task as multiple-choice questions to make the evaluation process easy.
  • Link to the paper

Task Setup

  • Given a pair of observations O1 and O2 and two hypotheses h1 and h2, the task is to select the more plausible hypothesis.
  • In general, P(h | O1, O2) is proportional to P(h | O1) * P(O2 | h, O1).
  • Different independence assumptions can be imposed on the structure of the problem, eg one assumption could be that the hypothesis is independent of the observations, while the “fully connected” assumption would jointly model both the observations and the hypothesis (a minimal sketch follows this list).
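
A tiny sketch of scoring hypotheses under the “fully connected” factorization, with placeholder probability functions (any trained scoring models could be plugged in for them):

```python
def score_hypothesis(h, o1, o2, p_h_given_o1, p_o2_given_h_o1):
    # P(h | O1, O2) is proportional to P(h | O1) * P(O2 | h, O1).
    return p_h_given_o1(h, o1) * p_o2_given_h_o1(o2, h, o1)

def pick_hypothesis(h1, h2, o1, o2, p_h_given_o1, p_o2_given_h_o1):
    s1 = score_hypothesis(h1, o1, o2, p_h_given_o1, p_o2_given_h_o1)
    s2 = score_hypothesis(h2, o1, o2, p_h_given_o1, p_o2_given_h_o1)
    return h1 if s1 >= s2 else h2
```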

Dataset

  • Along with crowdsourcing several plausible hypotheses for each observation-instance pair, an adversarial filtering algorithm (AF) is used to remove weak pairs of hypotheses.
  • Observation pairs are created using the ROCStories dataset, which is a collection of short, manually crafted stories of 5 sentences.
  • The average word length for both the context and the hypothesis is between 8 and 9.
  • To collect plausible hypotheses, the crowd workers were asked to fill in a plausible “in-between” sentence in natural language.
  • Given the plausible hypothesis, the crowd workers were asked to create an implausible hypothesis by editing fewer than 6 words.
  • The adversarial filtering approach from Zellers et al. is used with BERT as the adversary. A temperature parameter is introduced to control the maximum number of instances that can be changed in each adversarial filtering iteration.

Key Observations

  • Human performance: 91.4%
  • Baselines like an SVM classifier, a bag-of-words classifier (using GloVe) and max-pooling over BiLSTM representations: approx 50%
  • Entailment NLI baseline: 59%. This highlights the additional complexity of abductive NLI as compared to entailment NLI.
  • BERT: 68.9%
  • GPT: 63.1%
  • Numerical and spatial knowledge-based data points are particularly hard.
  • The model is more likely to fail when the narrative created by the incorrect hypothesis is plausible.
diff --git a/_site/site/2019/08/22/Large-Memory-Layers-with-Product-Keys.html b/_site/site/2019/08/22/Large-Memory-Layers-with-Product-Keys.html
new file mode 100644
index 00000000..e499a864
--- /dev/null
+++ b/_site/site/2019/08/22/Large-Memory-Layers-with-Product-Keys.html
@@ -0,0 +1,105 @@

Introduction

  • The paper proposes a structured key-value memory layer that:
    • Can scale to a very large size (and capacity).
    • Has very low computational overhead.
    • Supports exact search in the keyspace.
    • Can be easily integrated with neural networks.
  • Link to the paper

Architecture

  • The memory layer is composed of 3 components:
    • Query Network
      • Maps the input to a latent space.
      • Can be implemented as a feed-forward network.
      • Adding batch norm on top of the query network helps to spread out the keys.
    • Key selection module (a minimal sketch follows this list)
      • Let’s say there are a total of K keys of dimensionality d_q, of which we want to select the top k keys.
      • Partition the set of keys into two sets of subkeys (say Q1 and Q2) where each subset has sqrt(K) subkeys of dimensionality d_q/2.
      • The query is split into two subqueries (say q1 and q2).
      • Each of these two subqueries is compared with every subkey in its corresponding set of subkeys.
      • For example, q1 is compared with every subkey in Q1.
      • The top k ranked subkeys are selected from each set to create two new sets C1 and C2.
      • The keys from these two sets are combined under the concatenation operator to obtain k^2 candidate keys.
      • The final top k (concatenated) keys are searched from these k^2 keys.
      • The overall complexity is $O((\sqrt{K} + k^2) \times d_q)$ where K is the total number of keys.
    • Value lookup table
      • The values (corresponding to the selected keys) are aggregated (using a weighted sum operation) to obtain the output.
  • All the parameters are trainable, though, in practice, only the selected k memory slots are updated.
  • Using a multi-head attention mechanism helps to improve the performance further.
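
A numpy sketch of the product-key selection step, using inner products as the scoring function; the names are illustrative:

```python
import numpy as np

def product_key_topk(q, Q1, Q2, k):
    """Select top-k keys from the implicit set Q1 x Q2 (sqrt(K) subkeys each).

    q: query of shape (d_q,); Q1, Q2: subkey matrices of shape (sqrt_K, d_q/2).
    Returns indices (i, j) into Q1 and Q2 and the combined scores.
    """
    q1, q2 = np.split(q, 2)
    top1 = np.argsort(Q1 @ q1)[-k:]          # top-k subkeys from Q1
    top2 = np.argsort(Q2 @ q2)[-k:]          # top-k subkeys from Q2
    # Score of a concatenated key (i, j) is the sum of its subkey scores.
    scores = (Q1[top1] @ q1)[:, None] + (Q2[top2] @ q2)[None, :]  # (k, k)
    flat = np.argsort(scores.ravel())[-k:]   # final top-k among k^2 candidates
    i, j = np.unravel_index(flat, scores.shape)
    return top1[i], top2[j], scores[i, j]
```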

Experiments

  • 1 or more feedforward layers in the transformer are replaced by the memory layers.
  • The model is evaluated on large-scale language modeling tasks with 140 GB of data from the Common Crawl corpora (28 billion words).
  • Evaluation metrics
    • Perplexity on the test set.
    • Fraction of accessed values.
    • KL divergence between the (normalized) weights of key access and the uniform distribution.
    • The last two metrics are used together to determine how well the keys are utilized.

Results

  • Given the large size of the training dataset, adding more layers to the transformer model helps.
  • The effect of using a memory layer is more pronounced than the effect of adding new layers to the transformer. For example, a 12-layer transformer + memory layer outperforms a 24-layer transformer while being almost twice as fast.
  • The best position to place the memory is at an intermediate layer; placing the memory layer right after the input or just before the softmax layer does not work well in practice.
diff --git a/_site/site/2019/08/29/PHYRE-A-New-Benchmark-for-Physical-Reasoning.html b/_site/site/2019/08/29/PHYRE-A-New-Benchmark-for-Physical-Reasoning.html
new file mode 100644
index 00000000..41996628
--- /dev/null
+++ b/_site/site/2019/08/29/PHYRE-A-New-Benchmark-for-Physical-Reasoning.html
@@ -0,0 +1,164 @@

Introduction

  • The paper proposes the PHYRE (PHYsical REasoning) benchmark - consisting of classic mechanical puzzles in 2D physical environments - as a means to evaluate the physical reasoning ability of machine learning models.
  • Link to the paper

Environment

  • 2D world that obeys Newtonian mechanics.
  • Gravitational force + friction.
  • Non-deformable objects that can be static (ie fixed) or dynamic (ie can move and are affected by collisions etc).

Task

  • The learning agent starts in some initial world state (ie a configuration of objects).
  • The goal is described in the form of (subject, relation, object) where the agent’s task is to satisfy the relation between the subject and the object.
  • Currently, only the “touch” relation is supported.

Setup

  • The learning agent has to take a single action - placing one or more new dynamic objects in the world.
  • A simulator is run on the new configuration (for a fixed amount of time) to check if the goal condition is satisfied.
  • At the end of the simulation, a binary reward and intermediate observations (collected as the simulator executes) are provided to the learning agent.
  • These observations are 256x256 grids where each grid cell can take 1 of 7 values (denoting different types of objects).
  • Since only one relation is supported currently, the color is sufficient to encode the goal.

Benchmark Tiers

  • Two benchmark tiers are provided where each tier comprises a combination of:
    • a predefined set of all the actions that the agent is allowed to perform.
    • a set of tasks that can be solved by at least one action from the allowed action set.
  • PHYRE-B - The agent is allowed to place a single ball (of any radius) at any valid location.
  • PHYRE-2B - The agent is allowed to place 2 balls at any valid pair of locations.
  • Each of the two tiers has 25 task templates where each template comprises variants of a single task (same goal but different initial conditions).

Evaluation

  • Two evaluation setups are considered:
    • within-template, where the agent is trained on some tasks in a template and evaluated on a set of held-out tasks from the same template.
    • cross-template, where the agent is evaluated on tasks from a different template.
  • In the training phase, the model has access to the simulator (but not to the correct solution). So the model could learn an action-prediction model or a forward dynamics model or both.
  • In the testing phase, the model can query the simulator only a few times. Each query provides it with the binary reward and the intermediate observations.

Performance Measure

  • The emphasis is on solving more tasks (in fewer queries) during the test phase.
  • This requirement is captured using a metric called AUCCESS (a minimal sketch follows this list).
  • In general, the tasks in PHYRE-2B are harder than the tasks in PHYRE-B.
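
A sketch of a log-weighted success metric in the spirit of AUCCESS; this reflects my understanding of the definition (success rates at k = 1..100 attempts, weighted by log-increments so that solving in fewer attempts counts more) and should be checked against the paper:

```python
import numpy as np

def auccess(attempts_to_solve, max_attempts=100):
    """attempts_to_solve: per task, the number of attempts needed
    (np.inf if unsolved within the budget). Higher weight on fewer attempts.
    """
    ks = np.arange(1, max_attempts + 1)
    weights = np.log(ks + 1) - np.log(ks)
    solved_within_k = np.array(
        [np.mean([a <= k for a in attempts_to_solve]) for k in ks])
    return float((weights * solved_within_k).sum() / weights.sum())
```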

Baseline Agents

  • Random Agent - Randomly samples actions.
  • Non-parametric agent (MEM) - Generates R actions at random and uses the simulator to check how many tasks can be solved using these R random actions. During testing, the R actions are tried in decreasing order of the number of tasks they solve.
  • Non-parametric agent with online learning (MEM-O) - Variant of MEM where an online adaptation step is performed during test time (to update the rank of the actions).
  • Deep Q Network (DQN) with an action encoder, an observation encoder, and a fusion model (to combine the action and observation representations).
  • DQN with online learning (DQN-O) - Variant of DQN with online updates (during the test phase).
  • Contextual bandits.
  • Policy learning approaches like PPO and A2C.

Observations

  • Both contextual bandits and policy-based approaches show poor training stability.
  • The best agent, DQN-O, reaches an AUCCESS of 56.2% on PHYRE-B and 39.26% on PHYRE-2B. In general, agents with online adaptation perform better.
  • The tasks are designed such that 100000 attempts are sufficient to solve 100% of the tasks in PHYRE-B and 95% of the tasks in PHYRE-2B.
  • Even though only two tiers are provided right now, the benchmark is readily extensible and new tasks can be added in the future.
diff --git a/_site/site/2019/09/05/How-to-train-your-MAML.html b/_site/site/2019/09/05/How-to-train-your-MAML.html
new file mode 100644
index 00000000..77fc62ce
--- /dev/null
+++ b/_site/site/2019/09/05/How-to-train-your-MAML.html
@@ -0,0 +1,91 @@

Introduction

  • The paper proposes MAML++ - a modification of the MAML algorithm that stabilizes its training, improves generalization performance, and reduces the computational overhead.
  • Link to the paper

Notes


Unstable Training

  • Training the outer loop requires unfolding the inner loop multiple times.
  • In the absence of skip connections, the gradient is multiplied by the same parameters multiple times.
  • Large depth and absent skip connections could lead to exploding and vanishing gradients respectively.
  • The paper proposes to stabilize the gradient propagation by minimizing the target-set loss computed by the base network after every inner-loop step towards a support-set task (a minimal sketch follows this list).
  • It is important to anneal the contribution of the earlier steps and increase the contribution of the later steps over time.
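
A small sketch of the multi-step loss with annealed per-step importance weights; the annealing schedule here is illustrative:

```python
import torch

def multi_step_loss(per_step_losses, epoch, total_epochs):
    """per_step_losses: target-set losses after each of the N inner steps.

    Early in training, all steps contribute equally; over time the weight
    shifts towards the loss after the final inner step.
    """
    n = len(per_step_losses)
    progress = min(epoch / total_epochs, 1.0)
    weights = torch.ones(n) * (1.0 - progress) / n
    weights[-1] = weights[-1] + progress  # final step dominates over time
    return sum(w * l for w, l in zip(weights, per_step_losses))
```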

Second Order derivatives are expensive to compute

  • While first-order MAML is faster, the resulting model may not have as good a generalization error as second-order MAML.
  • The paper proposes to use derivative-order annealing where first-order gradients are used for the first 50 epochs and the network uses second-order gradients from thereon.
  • This derivative-order annealing appears to be more stable than models that use second-order derivatives only.

Batch Normalization

  • In MAML, the statistics of the current batch are used for normalization instead of accumulating the running statistics.
  • The paper proposes to collect the statistics per step, which can increase the convergence speed, stability, and generalization performance.
  • In MAML, the batch normalization biases are not updated in the inner loop, which can adversely impact the performance.
  • The paper proposes to learn a set of biases (per step) within the inner-loop update.

Fixed Learning Rate

  • MAML uses a single learning rate across all the steps and all the parameters. This means there is a single learning rate that needs to be tuned (as a hyperparameter) to work well for all the layers and steps.
  • An alternate solution would be to learn a separate learning rate per parameter, but this can be impractical as it doubles the number of parameters to be learned.
  • The paper proposes to learn a learning rate and direction for each layer in the network, for each step it takes in the inner loop (a minimal sketch follows this list).
  • The paper also proposes to anneal the learning rate of the outer loop (using cosine annealing) as it helps to achieve better generalization.
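
A minimal sketch of an inner-loop update with learnable per-layer, per-step learning rates; parameter handling is simplified relative to a full MAML++ implementation:

```python
def inner_loop_update(named_params, grads, per_layer_step_lrs, step):
    """One inner-loop step: each layer uses its own learned learning rate
    for this particular step. A negative learned rate flips the direction.

    named_params: dict name -> parameter tensor;
    grads: dict name -> gradient tensor;
    per_layer_step_lrs: dict name -> tensor of shape (num_inner_steps,),
    itself learned by the outer loop.
    """
    return {name: p - per_layer_step_lrs[name][step] * grads[name]
            for name, p in named_params.items()}
```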

Results

  • Using these modifications helps to outperform the MAML model on both the Omniglot and MiniImagenet datasets.
  • The biggest benefit comes from learning the per-layer, per-step learning rates and from using the per-step batch normalization.
diff --git a/_site/site/2019/09/12/Gossip-based-Actor-Learner-Architectures-for-Deep-RL.html b/_site/site/2019/09/12/Gossip-based-Actor-Learner-Architectures-for-Deep-RL.html
new file mode 100644
index 00000000..6b725d8e
--- /dev/null
+++ b/_site/site/2019/09/12/Gossip-based-Actor-Learner-Architectures-for-Deep-RL.html
@@ -0,0 +1,62 @@

  • Link to the paper
  • The paper considers the task of training an RL system by sampling data from multiple simulators (over parallel devices).
  • The setup is that of the distributed RL setting with n agents or actor-learners (each composed of a single learner and several actors). These agents are trying to maximize a common value function.
  • One (existing) approach is to perform on-policy updates with a shared policy. The policy could be updated in a synchronous manner (which does not scale well) or an asynchronous manner (which can be unstable due to stale gradients).
  • Off-policy approaches allow for better computational efficiency but can be unstable during training.
  • The paper proposes the Gossip-based Actor-Learner Architecture (GALA) which uses asynchronous communication (gossip) between the n agents to improve the training of deep RL models.
  • These agents are expected to converge to the same policy.
  • During training, the different agents are not required to share the same policy and it is sufficient that the agents’ policies remain $\epsilon$-close to each other. This relaxation allows the policies to be trained asynchronously.
  • The GALA approach is combined with A2C agents, resulting in GALA-A2C agents. They have better computational efficiency and scalability (as compared to A2C) and are similar in performance to A3C and IMPALA.
  • Training alternates between one local policy-gradient (and TD update) and asynchronous gossip between agents.
  • During the gossip step, the agents send their parameters to some of the other agents (referred to as the peers) and update their parameters based on the parameters received from the other agents (for which the given agent is a peer). A minimal sketch of this step follows this list.
  • GALA agents are implemented using non-blocking communication so that they can operate asynchronously.
  • The paper includes a proof that the policies learned by the different agents are within $\epsilon$ distance of each other (ie all the policies lie within an $\epsilon$-distance ball), thus ensuring that the policies do not diverge much from each other.
  • Six games from the Atari 2600 games suite are used for the experiments.
  • Baselines: A2C, A3C, IMPALA
  • GALA agents are configured in a directed ring graph topology.
  • With A2C, as the number of simulators increases, the number of convergent runs (runs with a threshold reward) decreases.
  • Using gossip algorithms increases or maintains the number of convergent runs. It also improves the performance, sample efficiency, and compute efficiency of A2C across all the six games.
  • When compared to IMPALA and A3C, GALA-A2C generally outperforms (or performs as well as) those baselines.
  • Given that the learned policies remain within an $\epsilon$ ball, the agents’ gradients are less correlated as compared to the A2C agents.
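
A sketch of the gossip step on a directed ring, where each agent mixes its parameters with those received from its in-peer; communication is shown synchronously for clarity, while GALA itself uses non-blocking sends and receives:

```python
import numpy as np

def gossip_step(agent_params, mixing=0.5):
    """agent_params: list of parameter vectors, one per agent, arranged in a
    directed ring (agent i receives from agent i-1). Returns mixed parameters.
    """
    n = len(agent_params)
    return [mixing * agent_params[i] + (1.0 - mixing) * agent_params[(i - 1) % n]
            for i in range(n)]

# Example: repeated gossip drives the agents' parameters towards consensus.
params = [np.random.randn(4) for _ in range(5)]
for _ in range(20):
    params = gossip_step(params)
print(np.std(np.stack(params), axis=0))  # close to zero
```
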
diff --git a/_site/site/2019/11/28/Contrastive-Learning-of-Structured-World-Models.html b/_site/site/2019/11/28/Contrastive-Learning-of-Structured-World-Models.html
new file mode 100644
index 00000000..8f4786ca
--- /dev/null
+++ b/_site/site/2019/11/28/Contrastive-Learning-of-Structured-World-Models.html
@@ -0,0 +1,131 @@

Introduction

  • The paper introduces Contrastively-trained Structured World Models (C-SWMs).
  • These models use a contrastive approach for learning representations in environments with compositional structure.
  • Link to the paper
  • Link to the code.

Approach

  • The training data is in the form of an experience buffer \(B = \{(s_t, a_t, s_{t+1})\}_{t=1}^T\) of state transition tuples.
  • The goal is to learn:
    • an encoder \(E\) that maps the observed states \(s_t\) (pixel state observations) to latent states \(z_t\).
    • a transition model \(T\) that predicts the dynamics in the hidden state.
  • The model defines the energy of a tuple \((s_t, a_t, s_{t+1})\) as \(H = d(z_t + T(z_t, a_t), z_{t+1})\).
  • The model has an inductive bias for modeling the effect of an action as a translation in the abstract state space.
  • An extra hinge-loss term is added: \(max(0, \gamma - d(\tilde{z}_{t}, z_{t+1}))\) where \(\tilde{z}_{t} = E(\tilde{s}_{t})\) is a corrupted latent state corresponding to a randomly sampled state \(\tilde{s}_{t}\). A minimal sketch of this objective follows this list.
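
A PyTorch-style sketch of the contrastive objective, assuming squared Euclidean distance as d; `encoder` and `transition` stand for the learned modules:

```python
import torch

def d(a, b):
    # Squared Euclidean distance between latent states.
    return ((a - b) ** 2).sum(-1)

def contrastive_loss(encoder, transition, s_t, a_t, s_next, s_neg, gamma=1.0):
    z_t, z_next, z_neg = encoder(s_t), encoder(s_next), encoder(s_neg)
    positive = d(z_t + transition(z_t, a_t), z_next)           # energy H
    negative = torch.clamp(gamma - d(z_neg, z_next), min=0.0)  # hinge term
    return (positive + negative).mean()
```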

Object-Oriented State Factorization

  • The goal is to learn object-oriented representations where each state embedding is structured as a set of objects.
  • Assuming the number of object slots to be \(K\), the latent space and the action space can be factored into \(K\) independent latent spaces (\(Z_1 \times ... \times Z_K\)) and action spaces (\(A_1 \times ... \times A_K\)) respectively.
  • There are K CNN-based object extractors and an MLP-based object encoder.
  • The actions are represented as one-hot vectors.
  • A fully connected graph is induced over the K objects (representations) and the transition function is modeled as a Graph Neural Network (GNN) over this graph.
  • The transition function produces the change in the latent state representation of each object.
  • The factorization can be taken into account in the loss function by summing over the loss corresponding to each object.

Environments

  • Grid world environments - 2D shapes, 3D blocks.
  • Atari games - Pong and Space Invaders.
  • 3-body physics simulation.

Setup

  • A random policy is used to collect the training data.
  • Evaluation is performed in the latent space (no reconstruction in the pixel space) using ranking metrics. The observations (to compare against) are randomly sampled from the buffer.
  • Baselines - auto-encoder based World Models and the Physics as Inverse Graphics model.

Results

  • In the grid-world environments, C-SWM models the latent dynamics almost perfectly.
  • Removing either the state factorization or the GNN transition model hurts the performance.
  • C-SWM performs well on Atari as well, but the results tend to have high variance.
  • The optimal value of $K$ should be obtained by hyperparameter tuning.
  • For the 3-body physics task, both the baselines and the proposed model work quite well.
  • Interestingly, the paper has a section on limitations:
    • The object extractor module cannot disambiguate between multiple instances of the same object (in a scene).
    • The current formulation of C-SWM can only be used with deterministic environments.
diff --git a/_site/site/2019/12/05/Mastering-Atari,-Go,-Chess-and-Shogi-by-Planning-with-a-Learned-Model.html b/_site/site/2019/12/05/Mastering-Atari,-Go,-Chess-and-Shogi-by-Planning-with-a-Learned-Model.html
new file mode 100644
index 00000000..95c202e5
--- /dev/null
+++ b/_site/site/2019/12/05/Mastering-Atari,-Go,-Chess-and-Shogi-by-Planning-with-a-Learned-Model.html
@@ -0,0 +1,121 @@

Introduction

  • The paper presents the MuZero algorithm that performs planning with a learned model.
  • The algorithm achieves state-of-the-art results on the Atari suite (where model-free approaches generally perform the best) and on planning-oriented games like Chess and Go (where planning approaches generally perform the best).
  • Link to the paper

Relation to standard Model-Based Approaches

  • Model-based approaches generally focus on reconstructing the true environment state or the sequence of full observations.
  • MuZero focuses on predicting only those aspects that are most relevant for planning - policy, value functions, and rewards.

Approach

  • The model consists of three components: a (representation) encoder, a dynamics function, and a prediction network.
  • The learning agent has two kinds of interactions - real interactions (ie the actions that are actually executed in the real environment) and hypothetical or imaginary interactions (ie the actions that are executed in the learned model, ie the dynamics function).
  • At any timestep t, the past observations o_1, ..., o_t are encoded into the state s_t using the encoder.
  • Now the model takes hypothetical actions for the next K timesteps by unrolling the model for K steps (a minimal sketch follows this list).
  • For each timestep k = 1, ..., K, the dynamics model predicts the immediate reward r_k and a new hidden state h_k using the previous hidden state h_{k-1} and action a_k.
  • At the same time, the policy p_k and the value function v_k are computed using the prediction network.
  • The initial hidden state h_0 is initialized using the state s_t.
  • Any MDP planning algorithm can be used to search for the optimal policy and value function given the state transitions and the rewards induced by the dynamics function.
  • Specifically, the MCTS (Monte Carlo Tree Search) algorithm is used, and the action a_{t+1} (ie the action that is executed in the actual environment) is selected from the policy outputted by MCTS.
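
A schematic sketch of the K-step unroll; `encoder`, `dynamics` and `prediction` stand for the three learned components and the interfaces are illustrative:

```python
def unroll(encoder, dynamics, prediction, observations, actions, K):
    """Unroll the learned model for K hypothetical steps.

    observations: o_1 ... o_t seen so far; actions: a_1 ... a_K hypothetical
    actions. Returns per-step (reward, policy, value) predictions.
    """
    h = encoder(observations)  # h_0 is initialized from the state s_t
    outputs = []
    for k in range(K):
        policy, value = prediction(h)
        reward, h = dynamics(h, actions[k])
        outputs.append((reward, policy, value))
    return outputs
```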

Collecting Data for the Replay Buffer

  • At each timestep t, the MCTS algorithm is executed to choose the next action (which will be executed in the real environment).
  • The resulting next observation o_{t+1} and reward r_{t+1} are stored, and the trajectory is written to the replay buffer (at the end of the episode).

Objective

  • For every hypothetical step k, match the predicted policy, value, and reward to the actual target values.
  • The target policy is generated by the MCTS algorithm.
  • The target value function and reward are generated by actually playing the game (or the MDP).

Relation to AlphaZero

  • MuZero leverages the search-based policy iteration from AlphaZero.
  • It extends AlphaZero to setups with a single agent (where self-play is not possible) and setups with a non-zero reward at the intermediate time steps.
  • The encoder and the prediction functions are similar to the ones used by AlphaZero.

Results

  • K is set to 5.
  • Environments: 57 Atari games along with Chess, Go, and Shogi.
  • MuZero achieves the same level of performance as AlphaZero for Chess and Shogi. In Go, MuZero slightly outperforms AlphaZero despite doing fewer computations per node in the search tree.
  • In Atari, MuZero achieves a new state-of-the-art compared to both model-based and model-free approaches.
  • The paper considers a variant called MuZero Reanalyze that reanalyzes old trajectories by re-running the MCTS algorithm with the updated network parameters. The motivation is to achieve better sample complexity.
  • MuZero performs well even when using a single simulation of MCTS (during inference).
  • During training, using more simulations of MCTS helps to achieve better performance, though even just 6 simulations per move are sufficient to learn a good model for Ms. Pacman.
diff --git a/_site/site/2019/12/12/Everything-Happens-for-a-Reason-Discovering-the-Purpose-of-Actions-in-Procedural-Text.html b/_site/site/2019/12/12/Everything-Happens-for-a-Reason-Discovering-the-Purpose-of-Actions-in-Procedural-Text.html
new file mode 100644
index 00000000..c8c3c50a
--- /dev/null
+++ b/_site/site/2019/12/12/Everything-Happens-for-a-Reason-Discovering-the-Purpose-of-Actions-in-Procedural-Text.html
@@ -0,0 +1,204 @@

Introduction

  • Procedural text comprehension tasks focus on modeling the effect of actions and predicting what happens next.
  • But they do not consider why some actions need to happen before other actions.
  • The paper proposes a new model called XPAD (eXPlainable Action Dependency) that considers the purpose of actions while predicting their effect.
  • The model favors effects that:
    • explain more of the actions in the text.
    • are more plausible given the context.
  • An existing procedural text benchmark dataset (Propara) is expanded by adding the task of explaining actions by predicting their dependencies.
  • Link to the paper
  • Link to the dataset

Setup

  • Input
    • Procedural (ie chronologically ordered) text sequence of T sentences.
    • List of N participant entities, whose state changes at some step.
  • Output
    • State change matrix $\pi(T \times N)$ with four possible states - move, create, destroy, none.
    • This matrix tracks how each participant’s state changes after each step.
  • Dependency Explanation Graph
    • Identify which steps are necessary to execute a given step (say s_i) and represent this dependency in the form of a dependency explanation graph G = <S, E>.
    • In this graph, each node is a step and the direction of an edge describes the order of dependency.

Dependency Graph Dataset

  • The Propara dataset is expanded to extract the dependency graph using both heuristic and automated methods.
  • The automated method is based on the coherence assumption: if step s_j changes the state of entity e_k, then s_j is a precondition for the first subsequent step that changes the state of e_k.

XPAD Model

  • The model is based on the ProStruct system and uses an encoder-decoder based architecture.
  • Encoder
    • Input: sentence s_t and entity e_j.
    • The sentence is encoded using GloVe vectors and a BiLSTM model, and the entity is encoded as an indicator variable.
    • The combined representation is denoted as c_tj.
    • This representation is passed through an MLP to generate k logits that encode the probability of each entity j undergoing a state change at step t.
  • Decoder
    • Beam search is performed to decode the encoder representation into the state change matrix and the dependency graph using a score function that ensures global consistency.
    • The score function has two components:
      • State change score - depends on the likelihood that the selected state changes at step t given the text and the state change history from steps s_1 to s_{t-1}.
      • Dependency graph score
        • This is based on the connectivity and likelihood of the resulting dependency explanation graph.
        • This score is used to bias the graph search towards:
          • predictions that have an identifiable purpose, ie checking if a particular state change prediction leads to a connection in the dependency explanation graph.
          • graphs that are more likely according to the background knowledge, to distinguish likely dependency links from the unlikely ones.
  • During training, XPAD has access to the correct path (in the search space) and learns to minimize the joint loss corresponding to predicting the state change and the dependency explanation graph.
  • During testing, XPAD performs beam search to predict the most likely state change and dependency explanation graph.

Experiments

  • Tasks:
    • State change prediction
    • Dependency explanation prediction
  • Baselines:
  • XPAD significantly outperforms all the baseline models on the dependency explanation task.
  • Improvements on the state change prediction task are less significant.
  • Removing dependency graph scores from XPAD leads to a drop in the F1 score.
  • The paper provides an elaborate discussion on the different types of errors that the XPAD system makes.
diff --git a/_site/site/2019/12/19/ALBERT-A-Lite-BERT-for-Self-supervised-Learning-of-Language-Representations.html b/_site/site/2019/12/19/ALBERT-A-Lite-BERT-for-Self-supervised-Learning-of-Language-Representations.html
new file mode 100644
index 00000000..9fb5bcda
--- /dev/null
+++ b/_site/site/2019/12/19/ALBERT-A-Lite-BERT-for-Self-supervised-Learning-of-Language-Representations.html
@@ -0,0 +1,114 @@

Introduction

  • The paper proposes parameter-reduction techniques to lower the memory consumption (and improve the training speed) of BERT.
  • It also proposes to use a self-supervised loss (based on inter-sentence coherence) and argues that this loss is better than the NSP loss used by BERT.
  • Link to the paper

Architecture

  • The ALBERT architecture is similar to that of BERT, with three major differences.
  • Factorized embedding parameterization (a minimal sketch follows this list)
    • In BERT and follow-up works, the embedding size was tied to the size of the context vector.
    • Since the context vector is expected to encode the entire context, it needs to have a large dimensionality.
    • One consequence of this choice is that even the embedding layer (which encodes the representation for each token) has a large size. This increases the overall memory footprint of the model.
    • The paper proposes to factorize the embedding parameters into two smaller matrices.
    • The embedding layer learns a low-dimensional representation of the tokens and this representation is projected into a high-dimensional space.
  • Cross-layer parameter sharing
    • ALBERT shares all the parameters across the layers.
  • Inter-sentence coherence loss
    • BERT uses two losses - Masked Language Modeling loss (MLM) and Next Sentence Prediction (NSP).
    • In the NSP task, the model is provided a pair of sentences and it has to predict if the two sentences appear consecutively in the same document or not. Negative samples are created by sampling sentences from different documents.
    • The paper argues that NSP is not effective as a loss function as it merges topic prediction and coherence prediction into one task (as the two sentences come from different documents). Topic prediction is an easier task as compared to coherence prediction.
    • Hence the paper proposes to use the Sentence Order Prediction task where the model has to predict which of the two sentences comes first in a document. The negative samples are created by simply swapping the order in the positive samples. Hence both the sentences come from the same document and topic prediction alone cannot be used to solve the task.
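
A PyTorch-style sketch of the factorized embedding parameterization; the vocabulary and dimension sizes here are illustrative. Instead of a single V x H embedding matrix, a V x E lookup is followed by an E x H projection, which reduces parameters when E is much smaller than H:

```python
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    # V x H parameters become V x E + E x H
    # (e.g. 30000*128 + 128*4096 instead of 30000*4096).
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=4096):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, embed_dim)
        self.project = nn.Linear(embed_dim, hidden_dim, bias=False)

    def forward(self, token_ids):
        return self.project(self.lookup(token_ids))
```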

Setup

  • Different variants (in terms of size) of the ALBERT and BERT models are compared (eg ALBERT, ALBERT-x, BERT-x, etc).
  • In general, the ALBERT models have many times fewer parameters than the BERT models.
  • Datasets - BookCorpus, English Wikipedia.

Observations

  • ALBERT-xxlarge significantly outperforms the BERT-large model even though it has around 70% of the parameters of the BERT-large model.
  • BERT-xlarge performs worse than BERT-base, hinting that it is difficult to train such large models.
  • ALBERT models also have better data throughput as compared to BERT models.
  • For the ALBERT models, an embedding size of 128 performs the best.
  • As the hidden dimension is increased, the model obtains better performance, but with diminishing returns.
  • Very wide ALBERT models (say with a context size of 1024) do not benefit much from depth.
  • Using additional training data boosts the performance for most of the downstream tasks.
  • The paper empirically shows that using dropout could hurt the performance of the ALBERT models. This observation may not hold for BERT as it does not share parameters across layers and hence may need regularization via dropout.
  • ALBERT also improves the state-of-the-art performance on the GLUE, SQuAD and RACE benchmarks, for both the single-model and ensemble setups.
diff --git a/_site/site/2019/12/26/Towards-a-Unified-Theory-of-State-Abstraction-for-MDPs.html b/_site/site/2019/12/26/Towards-a-Unified-Theory-of-State-Abstraction-for-MDPs.html
new file mode 100644
index 00000000..7ea43cfd
--- /dev/null
+++ b/_site/site/2019/12/26/Towards-a-Unified-Theory-of-State-Abstraction-for-MDPs.html
@@ -0,0 +1,124 @@

Introduction

  • The paper studies five different techniques for state abstraction in MDPs (Markov Decision Processes) and evaluates their usefulness for planning and learning.
  • The general idea behind abstraction is to map the actual (or observed) state to an abstract state that should be more amenable for learning.
  • It can be thought of as a mapping from one representation to another representation while preserving some useful properties.
  • Link to the paper

General Definition

  • Consider an MDP \(M = <S, A, P, R, \gamma>\) where \(S\) is the finite set of states, \(A\) is the finite set of actions, \(P\) is the transition function, \(R\) is the bounded reward function and \(\gamma\) is the discount factor.
  • The abstract version of the MDP is \(\widetilde{M} = <\widetilde{S}, A, \widetilde{P}, \widetilde{R}, \gamma>\) where \(\widetilde{S}\) is the finite set of abstract states, \(\widetilde{P}\) is the transition function in the abstract state space and \(\widetilde{R}\) is the bounded reward function in the abstract state space.
  • The abstraction function \(\phi\) is a function that maps a given state \(s\) to its abstract counterpart \(\widetilde{s}\).
  • The inverse image \(\phi^{-1}(\widetilde{s})\) is the set of ground states that map to \(\widetilde{s}\) under the abstraction function \(\phi\).
  • A weighting function \(w(s)\) is used to measure how much a state \(s\) contributes to the abstract state \(\phi(s)\).

Topology of Abstraction Space

  • Given two abstraction functions \(\phi_{1}\) and \(\phi_{2}\), \(\phi_{1}\) is said to be finer than \(\phi_{2}\) iff for any states \(s_{1}, s_{2}\), if \(\phi_{1}(s_{1}) = \phi_{1}(s_{2})\) then \(\phi_{2}(s_{1}) = \phi_{2}(s_{2})\).
  • This finer relation is reflexive, antisymmetric, and transitive, and induces a partial ordering.

Five Types of Abstraction

+ +
    +
  • +

    While many abstractions are possible, not all abstractions are equally important.

    +
  • +
  • +

    Model-irrelevance abstraction \(\phi_{model}\):

    + +
      +
    • +

      If two states \(s_{1}\) and \(s_{2}\) have the same abstracted state, then their one-step model is preserved.

      +
    • +
    • +

      Consider any action \(a\) and any abstract state \(\widetilde{s}\): if \(\phi_{model}(s_{1}) = \phi_{model}(s_{2})\) then \(R(s_1, a) = R(s_2, a)\) and \(\sum_{s' \in \phi_{model}^{-1}(\widetilde{s})}P_{s_1, s'}^{a} = \sum_{s' \in \phi_{model}^{-1}(\widetilde{s})}P_{s_2, s'}^{a}\).

      +
    • +
    +
  • +
  • +

    \(Q^{\pi}\)-irrelevance abstraction:

    + +
      +
    • +

      It preserves the state-action value function for all the states.

      +
    • +
    • +

      \(\phi_{Q^{\pi}}(s_1) = \phi_{Q^{\pi}}(s_2)\) implies \(Q^{\pi}(s_1, a) = Q^{\pi}(s_2, a)\).

      +
    • +
    +
  • +
  • +

    \(Q^{*}\)-irrelevance abstraction:

    + +
      +
    • It preserves the optimal state-action value function.
    • +
    +
  • +
  • +

    \(a^{*}\)-irrelevance abstraction:

    + +
      +
    • It preserves the optimal action and its value function.
    • +
    +
  • +
  • +

    \(\phi_{\pi^{*}}\)-irrelevance abstraction:

    + +
      +
    • It preserves the optimal action.
    • +
    +
  • +
  • +

    In terms of fineness, \(\phi_0 \geq \phi_{model} \geq \phi_{Q^{\pi}} \geq \phi_{Q^*} \geq \phi_{a^*} \geq \phi_{\pi^*}\). Here \(\phi_0\) is the identity mapping, i.e., \(\phi_0(s) = s\).

    +
  • +
  • +

    If a property applies to any abstraction, it also applies to all the finer abstractions.

    +
  • +
+ +

Key Theorems

+ +
    +
  • +

    As we go from finer to coarser abstractions, the information loss increases (i.e., fewer components of the original MDP can be recovered) while the state space shrinks (i.e., the efficiency of solving the problem increases). This leads to a tradeoff when selecting abstractions.

    +
  • +
  • +

    For example, with abstractions \(\phi_{model}, \phi_{Q^{\pi}}, \phi_{Q^*}, \phi_{a^*}\), the optimal abstract policy \(\widetilde{\pi}^*\) is optimal in the ground MDP.

    +
  • +
  • +

    Similarly, if each state-action pair is visited infinitely often and the step-size decays properly, Q-learning with \(\phi_{model}, \phi_{Q^{\pi}}, \phi_{Q^*}\) converges to the optimal state-action value functions in the MDP. More conditions are needed for convergence in the case of the remaining two abstractions.

    +
  • +
  • +

    For \(\phi_{model}, \phi_{Q^{\pi}}, \phi_{Q^*}, \phi_{a^*}\), the model built with the experience converges to the true abstract model with infinite experience if the weighing function \(w(s)\) is fixed.

    +
  • +
+ diff --git a/_site/site/2020/01/02/Superposition-of-many-models-into-one.html b/_site/site/2020/01/02/Superposition-of-many-models-into-one.html new file mode 100644 index 00000000..4169e198 --- /dev/null +++ b/_site/site/2020/01/02/Superposition-of-many-models-into-one.html @@ -0,0 +1,185 @@ +

Introduction

+ +
    +
  • +

    The paper proposes a technique (called Parameter Superposition or PSP) for training and storing multiple models within a single set (or instance) of parameters.

    +
  • +
  • +

    The different models exist in “superposition” and can be retrieved dynamically given task-specific context information.

    +
  • +
  • +

    Link to the paper.

    +
  • +
+ +

Parameter Superposition

+ +
    +
  • +

    Consider a task with input \(x \in R^N\) and parameters \(W \in R^{M \times N}\), where the output (target or features) is given as \(y=Wx\).

    +
  • +
  • +

    Now consider \(K\) such tasks with parameters \(W_1, W_2, \cdots W_K\).

    +
  • +
  • +

    If each \(W_k\) requires only a small subspace in \(R^N\), then a linear transformation \(C_k^{-1}\) can be used such that each \(W_kC_k^{-1}\) occupies a mutually orthogonal subspace in \(R^N\).

    +
  • +
  • +

    The set of parameters \(W_1, \cdots W_K\) can then be represented by a single \(W \in R^{M \times N}\) by summing the terms: \(W = \sum_{k} W_kC_k^{-1}\).

    +
  • +
  • +

    The parameters corresponding to the \(k^{th}\) task can be retrieved (with some noise) using the context \(C_k\) as \(\widetilde{W}_k = WC_k\).

    +
  • +
  • +

    Even though the retrieval is noisy, the effect of noise is limited for the context vectors used in the paper.

    +
  • +
  • +

    Finally, \(\widetilde{y} = \widetilde{W}_{k}x = (WC_{k})x = W(C_{k}x)\).

    +
  • +
  • +

    Instead of learning \(K\) separate models, only \(K\) context vectors (along with one superimposed model) need to be learned. (See the sketch after this list.)

    +
  • +
  • +

    The key assumption is that \(N\) (in \(x \in R^N\)) is large enough such that each \(W_k\) requires only a small subspace of \(R^N\).

    +
  • +
  • +

    Since images and speech signals tend to occupy a low-dimensional manifold, this requirement can be satisfied by over-parameterizing \(x\).

    +
  • +
+ +
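
A small numpy sketch of the mechanics, using the binary contexts discussed in the next section (for \(c_k \in \{-1, +1\}^N\), \(C_k = diag(c_k)\) is its own inverse). The random, untrained models here only illustrate storage and noisy retrieval, not trained behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 32, 4096, 2
W_k = [rng.normal(size=(M, N)) for _ in range(K)]           # per-task models
c_k = [rng.choice([-1.0, 1.0], size=N) for _ in range(K)]   # binary contexts

W = sum(Wi * ci for Wi, ci in zip(W_k, c_k))  # W = sum_k W_k C_k^{-1}

x = rng.normal(size=N)
y_true = W_k[0] @ x            # output of the stored model 1
y_psp = W @ (c_k[0] * x)       # y~ = (W C_1) x = W (C_1 x)

# Retrieval is noisy: y_psp = y_true + interference from the other model.
# The interference is zero-mean; training (and the small-subspace assumption
# above) is what keeps it benign in the actual method.
cos = y_psp @ y_true / (np.linalg.norm(y_psp) * np.linalg.norm(y_true))
print(f"cosine(y_psp, y_true) = {cos:.2f}")  # well above 0 (unrelated vectors)
```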

Choice of Context C

+ +
    +
  • +

    Rotational Superposition (pspRotation)

    + +
      +
    • +

      Sample rotations uniformly from the orthogonal group \(O(M)\).

      +
    • +
    • +

      The downside is that if \(M \sim N\), it requires storing as many parameters as learning \(K\) individual models (since each \(C\) is of size \(M \times M\)).

      +
    • +
    +
  • +
  • +

    Complex Superposition (pspComplex)

    + +
      +
    • +

      The design of rotational superposition can be improved by choosing \(C_k\) to be a diagonal matrix, i.e., \(C_k = diag(c_k)\), where \(c_k\) is a vector of size \(M\).

      +
    • +
    • +

      Choosing \(c_k\) to be a vector of complex numbers (of the form \(c_{k}^{j} = e^{i\phi_{j}(k)}\) where \(\phi_{j}(k)\), or the phase, is sampled uniformly from \([-\pi, \pi]\)) leads to \(C_k\) being a diagonal orthogonal matrix.

      +
    • +
    +
  • +
  • +

    Powers of a single context

    + +
      +
    • The memory footprint can be further reduced by choosing the context vectors to be integral powers of the first context vector.
    • +
    +
  • +
  • +

    Binary Superposition (pspBinary)

    + +
      +
    • This is a special case of complex superposition where the context vectors are binary.
    • +
    +
  • +
+ +

Neural Network Superposition

+ +
    +
  • +

    The parameter superposition principle can be applied to all the linear layers of a network.

    +
  • +
  • +

    For the convolutional layers, it makes more sense to apply superposition to the convolutional kernel and not to the input image (as the dimensionality of convolutional parameters is smaller than that of inputs).

    +
  • +
+ +

Experiments

+ +
    +
  • +

    For all the experiments, the baseline is a standard supervised learning setup, unless mentioned otherwise.

    +
  • +
  • +

    The metric is the performance on the previous tasks when the model has been trained on the newer tasks.

    +
  • +
  • +

    Input Interference

    + +
      +
    • +

      The input distribution changes over time.

      +
    • +
    • +

      Permuted MNIST dataset is used where each permutation of the pixels corresponds to a new task.

      +
    • +
    • +

      A new task is sampled every 1000 mini-batches.

      +
    • +
    • +

      As the network size increases, Parameter Superposition (psp) outperforms the baseline by an increasingly significant margin.

      +
    • +
    • +

      pspRotation > pspComplex > pspBinary in terms of both performance and the number of additional parameters required for each new task.

      +
    • +
    • +

      Given that pspBinary is the easiest to implement while being comparable to more sophisticated baselines like Elastic Weight Consolidation (EWC) and Synaptic Intelligence, the paper presents most of the results with the pspBinary model.

      +
    • +
    +
  • +
  • +

    Continuous Domain Shift

    + +
      +
    • +

      Rotating-MNIST and Rotating-FashionMNIST tasks are proposed to simulate continuous domain shift.

      +
    • +
    • +

      In these tasks, the input images are rotated in-plane by a small angle such that the rotation is complete after 1000 steps.

      +
    • +
    • +

      A new context is assigned every 100 steps, as the per-step changes in the angle are very small.

      +
    • +
    • +

      The 10 context vectors used in the first 1000 steps are reused for the subsequent steps.

      +
    • +
    +
  • +
  • +

    Randomly changing the context vector

    + +
      +
    • +

      The paper considers an ablation where the context vector is randomly changed at every step (of the 1000-step cycle). This requires the superposition model to store 1000 models.

      +
    • +
    • +

      This approach is better than the supervised learning baseline but not as good as the proposed psp* models.

      +
    • +
    +
  • +
  • +

    Output Interference

    + +
      +
    • +

      This is the setup where the model transitions from one classification task to another.

      +
    • +
    • +

      Incremental CIFAR dataset is used with Resnet18 as the base model.

      +
    • +
    • +

      Baseline is a standard supervised learning model where a new classification head is used for each task (since the classes have a different meaning in each dataset). The model component before the classification layer is shared across the tasks.

      +
    • +
    • +

      Even though the labels are different across the datasets, the pspBinary model, trained with a single output layer, outperforms the multi-headed baseline.

      +
    • +
    +
  • +
diff --git a/_site/site/2020/01/09/Accurate-Large-Minibatch-SGD-Training-ImageNet-in-1-Hour.html b/_site/site/2020/01/09/Accurate-Large-Minibatch-SGD-Training-ImageNet-in-1-Hour.html new file mode 100644 index 00000000..81179b39 --- /dev/null +++ b/_site/site/2020/01/09/Accurate-Large-Minibatch-SGD-Training-ImageNet-in-1-Hour.html @@ -0,0 +1,107 @@ +

Introduction

+ +
    +
  • +

    Training models with large minibatches (using distributed synchronous SGD) can lead to optimization issues.

    +
  • +
  • +

    The paper presents techniques for training models with large batch size while matching the accuracy of small minibatch setups.

    +
  • +
  • +

    The paper focuses on the ImageNet dataset, but many of the proposed ideas are applicable broadly.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Linear Scaling Rule

+ +
    +
  • +

    When the minibatch size increases by a factor of k, the learning rate should also be increased by a factor of k (while keeping all other hyperparameters like weight decay fixed).

    +
  • +
  • +

    Note that this is an empirical rule and is not expected to hold under all conditions.

    +
  • +
  • +

    One such condition is when the model is changing rapidly during the first few epochs. In this case, a warmup phase is introduced to stabilize the model.

    +
  • +
  • +

    The paper verifies that the scaling rule is applicable to batch sizes as large as 8K.

    +
  • +
+ +

Warmup

+ +
    +
  • The learning rate should be gradually ramped up from a small value to a large value to allow convergence. (A sketch combining the linear scaling rule and warmup follows.)
  • +
+ +
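
A sketch of how the scaling rule and warmup combine, assuming a reference setup of (learning rate 0.1, batch size 256) scaled to a batch size of 8192 with five warmup epochs (matching the numbers reported in the Results section); the exact schedule shape is an assumption.

```python
def learning_rate(step, steps_per_epoch, warmup_epochs=5,
                  base_lr=0.1, base_batch=256, batch=8192):
    k = batch // base_batch          # minibatch scaled by a factor of k
    target_lr = base_lr * k          # -> learning rate scaled by k (here 3.2)
    warmup_steps = warmup_epochs * steps_per_epoch
    if step < warmup_steps:          # ramp linearly from base_lr to target_lr
        return base_lr + (target_lr - base_lr) * step / warmup_steps
    return target_lr                 # afterwards, the usual (scaled) schedule

print(learning_rate(0, 5000))        # 0.1 at the start of warmup
print(learning_rate(25000, 5000))    # 3.2 once warmup finishes
```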

Batch Normalization

+ +
    +
  • +

    Batch normalization uses batch statistics to normalize the data. Hence, the loss corresponding to each data point (in the batch) is not independent. Thus, changing the batch size could change the underlying function being optimized.

    +
  • +
  • +

    In the distributed SGD setup, the per-GPU (or per-worker) batch size should be kept constant, and each worker should compute the batch norm statistics over only its own samples.

    +
  • +
+ +

Pitfalls when using distributed SGD

+ +
    +
  • +

    When using weight decay, scaling the cross-entropy loss is not the same as scaling the learning rate.

    +
  • +
  • +

    When using momentum, changing the learning rate could require “momentum correction.”

    +
  • +
  • +

    Ensure that the per-worker loss is normalized by the size of the total minibatch and not just by the size of the minibatch that each worker sees.

    +
  • +
  • +

    For each epoch, use a single random shuffling of the training data (before dividing it among the workers).

    +
  • +
+ +

Communication

+ +
    +
  • +

    The paper describes various techniques to speed up the training pipeline by reducing the communication overhead between nodes. (Each node can have one or more GPUs).

    +
  • +
  • +

    First, a node sums the gradients from all the GPUs it has.

    +
  • +
  • +

    The gradients are shared and summed across all the nodes.

    +
  • +
  • +

    Each node broadcasts the resulting gradient to all the GPUs it has.

    +
  • +
  • +

    Gradient aggregation is performed in parallel with backpropagation: while aggregating the gradient for one layer, the system starts computing the gradient of the next layer. (A simulation of the aggregation scheme follows this list.)

    +
  • +
+ +
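
A toy numpy simulation of this three-phase flow; a real system would use collective communication primitives (e.g., allreduce), so this only illustrates where the sums happen.

```python
import numpy as np

n_nodes, gpus_per_node, dim = 4, 8, 10
rng = np.random.default_rng(0)
# grads[node][gpu] is the gradient computed by one GPU
grads = [[rng.normal(size=dim) for _ in range(gpus_per_node)]
         for _ in range(n_nodes)]

node_sums = [sum(g) for g in grads]   # 1) each node sums its GPUs' gradients
total = sum(node_sums)                # 2) gradients summed across all nodes
result = [[total for _ in range(gpus_per_node)]   # 3) broadcast back to GPUs
          for _ in range(n_nodes)]

# sanity check: every GPU now holds the sum over all GPUs on all nodes
expected = sum(g for node in grads for g in node)
assert np.allclose(result[0][0], expected)
```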

Results

+ +
    +
  • +

    Using these approaches, a Resnet50 model can be trained on the ImageNet dataset in an hour (using 256 workers).

    +
  • +
  • +

    When an appropriate warmup strategy is used, the training and the validation curves (for the large batch size setup) match the corresponding curves for the small batch size setup.

    +
  • +
  • +

    The best performing warmup strategy is the one where training starts at a learning rate of 0.1 and linearly increases to 3.2 over five epochs.

    +
  • +
  • +

    The paper shows that the results are not specific to the Resnet50 model (experiments with Resnet101 model) or the use case (experiments with object detection and instance segmentation using Mask R-CNN).

    +
  • +
  • +

    Along with providing the empirical validation of the proposed ideas, the paper describes all the hyperparameters. It also includes the training and validation curves with the different configurations which enable others to replicate and build on this work.

    +
  • +
diff --git a/_site/site/2020/01/16/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML.html b/_site/site/2020/01/16/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML.html new file mode 100644 index 00000000..1cbb6ec7 --- /dev/null +++ b/_site/site/2020/01/16/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML.html @@ -0,0 +1,141 @@ +

Introduction

+ +
    +
  • +

    The paper investigates two possible reasons behind the usefulness of the MAML algorithm:

    + +
      +
    • +

      Rapid Learning - Does MAML learn features that are amenable for rapid learning?

      +
    • +
    • +

      Feature Reuse - Does the MAML initialization provide high-quality features that are useful for unseen tasks?

      +
    • +
    +
  • +
  • +

    This leads to a follow-up question: how much task-specific inner loop adaptation is actually needed?

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Approach

+ +
    +
  • +

    In a standard few-shot learning setup, the different datasets have different classes. Hence, the top-most layer (or the head) of the learning model should be different for different tasks.

    +
  • +
  • +

    The subsequent discussion only applies to the body of the network (i.e., the network minus the head).

    +
  • +
  • +

    Freezing Layer Representations

    + +
      +
    • +

      In this setup, a subset (or all) of the parameters are frozen (after MAML training) and are not adapted at test time.

      +
    • +
    • +

      Even when the entire network is frozen, the performance drops only marginally.

      +
    • +
    • +

      This indicates that the representation learned by the meta-initialization is good enough to be useful on the test tasks (without requiring any adaptation step).

      +
    • +
    • +

      Note that the head of the network is still adapted during testing.

      +
    • +
    +
  • +
  • +

    Representational Similarity

    + +
      +
    • +

      In this setup, the paper measures how much the latent representations (learned by the network) change during the inner loop update, for a fully trained model.

      +
    • +
    • +

      Canonical Correlation Analysis (CCA) and Centered Kernel Alignment (CKA) metrics are used to measure the similarity between the representations.

      +
    • +
    • +

      The main finding is that the representations in the body of the network are very similar before and after the inner loop updates while the representations in the head of the network are very different.

      +
    • +
    +
  • +
  • +

    The above two observations indicate that feature reuse is the primary driving factor for the success of MAML.

    +
  • +
  • +

    When does feature reuse happen

    + +
      +
    • +

      The paper considers the model at different stages of training and compares the similarity in the representation (before and after the inner loop update).

      +
    • +
    • +

      Even early in training, the CCA similarity between the representations (before and after the inner loop update) is quite high. Similarly, freezing the layers (for the test time update), early in training, does not degrade the test time performance much. This hints that the feature reuse happens early in the learning process.

      +
    • +
    +
  • +
+ +

The ANIL (Almost No Inner Loop) Algorithm

+ +
    +
  • +

    The empirical evidence suggests that the success of MAML lies in the feature reuse.

    +
  • +
  • +

    The authors build on this observation and propose a simplification of the MAML algorithm: ANIL or Almost No Inner Loop Algorithm

    +
  • +
  • +

    In this algorithm, the inner loop updates are applied only to the head of the network.

    +
  • +
  • +

    Despite being much more straightforward, the performance of ANIL is close to the performance of MAML for both few-shot image classification and RL tasks.

    +
  • +
  • +

    Removing most of the inner loop parameters speeds up the computation by a factor of 1.7 (during training) and 4.1 (during inference).

    +
  • +
+ +

Removing the Inner Loop Update

+ +
    +
  • +

    Given that it is possible to remove most of the parameters from the inner loop update (without affecting the performance), the next step is to check if the inner loop update can be removed entirely.

    +
  • +
  • +

    This leads to the NIL (No Inner Loop) algorithm, which does not involve any inner loop adaptation steps.

    +
  • +
+ +

Algorithm

+ +
    +
  • +

    A few-shot learning model is trained - either with MAML or ANIL.

    +
  • +
  • +

    During testing, the head is removed.

    +
  • +
  • +

    For each task, the K training examples are fed to the body to obtain class representations.

    +
  • +
  • +

    For a given test data point, its representation is compared with the different class representations to obtain the target class. (See the sketch after this list.)

    +
  • +
  • +

    The NIL algorithm performs similar to the MAML and the ANIL algorithms for the few-shot image classification task.

    +
  • +
  • +

    Note that it is still important to use MAML/ANIL during training, even though the learned head is not used during evaluation.

    +
  • +
+ +
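
A possible PyTorch sketch of the NIL evaluation step; the `body` module, the mean-feature class representations, and the use of cosine similarity are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nil_predict(body, support_x, support_y, query_x, n_classes):
    """Classify query points without any inner-loop adaptation."""
    with torch.no_grad():
        feats = body(support_x)                      # K-shot support features
        # class representation = mean feature of the support examples per class
        protos = torch.stack([feats[support_y == c].mean(0)
                              for c in range(n_classes)])
        q = body(query_x)
        # compare each query feature with every class representation
        sims = F.cosine_similarity(q.unsqueeze(1), protos.unsqueeze(0), dim=-1)
        return sims.argmax(dim=1)

# usage: preds = nil_predict(trained_body, xs, ys, xq, n_classes=5)
```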

Conclusion

+ +
    +
  • The paper discusses the different classes of meta-learning approaches. It concludes with the observation that feature reuse (and not rapid adaptation) seems to be the common model of operation for both optimization-based meta-learning (e.g., MAML) and model-based meta-learning.
  • +
diff --git a/_site/site/2020/01/23/Observational-Overfitting-in-Reinforcement-Learning.html b/_site/site/2020/01/23/Observational-Overfitting-in-Reinforcement-Learning.html new file mode 100644 index 00000000..75c9bc7f --- /dev/null +++ b/_site/site/2020/01/23/Observational-Overfitting-in-Reinforcement-Learning.html @@ -0,0 +1,134 @@ +

Introduction

+ +
    +
  • +

    The paper studies observational overfitting: The phenomenon where an agent overfits to different observation spaces even though the underlying MDP remains fixed.

    +
  • +
  • +

    Unlike other works, the “background information” (in the pixel space) is correlated with the progress of the agent (and is not just noise).

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Setup

+ +
    +
  • +

    Base MDP $M = (S, A, R, T)$ where $S$ is the state space, $A$ is the action space, $R$ is the reward function, and $T$ is the transition dynamics.

    +
  • +
  • +

    $M$ is parameterized using $\theta$. In practice, this means introducing an observation function $\phi_{\theta}$, i.e., $M_{\theta} = (M, \phi_{\theta})$.

    +
  • +
  • +

    A distribution over $\theta$ defines a distribution over the MDPs.

    +
  • +
  • +

    The learning agent has access to the pixel space observations and not the state space observations.

    +
  • +
  • +

    Generalization gap is defined as $J_{\theta}(\pi) - J_{\theta^{train}}(\pi)$ where $\pi$ is the learning agent, $\theta$ is the distribution over all the observation functions, $\theta^{train}$ is the distribution over the observation functions corresponding to the training environments. $J_{\theta}(\pi)$ is the average reward that the agent obtains over environments sampled from $M_{\theta}$.

    +
  • +
  • +

    $\phi_{\theta}$ produces two kinds of features - generalizable (invariant across $\theta$) and non-generalizable (dependent on $\theta$) - i.e., $\phi_{\theta}(s) = concat(f(s), g_{\theta}(s))$ where $f$ is the invariant function and $g_{\theta}$ is the non-generalizable function.

    +
  • +
  • +

    The problem is set up such that “explicit regularization” can easily solve it. The focus is on understanding the effect of “implicit regularization”.

    +
  • +
+ +

Experiments

+ +

Overparameterized LQR

+ +
    +
  • +

    LQR is used as a proxy for deep RL architectures given its advantages like enabling exact gradient descent.

    +
  • +
  • +

    The functions are parameterized as follows:

    + +
      +
    • +

      $f(s) = W_c s$

      +
    • +
    • +

      $g_{\theta}(s) = W_{\theta} s$

      +
    • +
    +
  • +
  • +

    The observation at time $t$, $o_t$, is obtained by stacking the two feature maps: $o_t = [W_c; W_{\theta}] s_t$. (A sketch of this construction follows this list.)

    +
  • +
  • +

    Action at time $t$ is given as $a_t = K o_{t}$ where $K$ is the policy matrix.

    +
  • +
  • +

    Dimensionality:

    + +
      +
    • state $s$: $d_{state} = 100$
    • +
    • $f(s)$: $d_{state} = 100$
    • +
    • $g_{\theta}(s)$: $d_{noise} = 1000$
    • +
    • observation $o$: $d_{state} + d_{noise} = 1100$
    • +
    +
  • +
  • +

    In case of training on just one environment, multiple solutions exist, and overfitting happens.

    +
  • +
  • +

    Increasing $d_{noise}$ increases the generalization gap.

    +
  • +
  • +

    Overparameterizing the network decreases the generalization gap and also reduces the norm of the policy.

    +
  • +
+ +
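
A small numpy sketch of this observation model, using the dimensions listed above ($d_{state} = 100$, $d_{noise} = 1000$):

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_noise = 100, 1000
W_c = rng.normal(size=(d_state, d_state))      # invariant across thetas
W_theta = rng.normal(size=(d_noise, d_state))  # resampled per environment

def observe(s):
    """phi_theta(s) = concat(f(s), g_theta(s)) = [W_c; W_theta] s."""
    return np.concatenate([W_c @ s, W_theta @ s])

s = rng.normal(size=d_state)
o = observe(s)
print(o.shape)  # (1100,) -- the policy sees o, never s
```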

Projected Gym Environments

+ +
    +
  • +

    The base MDP is the Gym Environment.

    +
  • +
  • +

    $M_{\theta}$ is generated as before.

    +
  • +
  • +

    Increasing both width and depth for basic MLPs improves generalization.

    +
  • +
  • +

    Generalization also depends on the choice of activation function, residual layers, etc.

    +
  • +
+ +

Deconvolutional Projections

+ +
    +
  • +

    In the Gym environment, the actual state is projected to a larger vector and reshaped into an 84x84 tensor (image).

    +
  • +
  • +

    The image from $f$ is concatenated with the image from $g$. This setup is referred to as the Gym-Deconv.

    +
  • +
  • +

    The relative order of performance between NatureCNN, IMPALA, and IMPALA-Large (on both CoinRun and Gym-Deconv) is the same as the order of the number of parameters they contain.

    +
  • +
  • +

    In an ablation, the policy is given access to only $g_{\theta}(s)$, which makes it impossible for the model to generalize. In this test of memorization capacity, implicit regularization seems to reduce the memorization effect.

    +
  • +
+ +

Overparameterization in CoinRun

+ +
    +
  • +

    The pixel space observation in CoinRun is downsized from 64x64 to 32x32 and flattened into a vector.

    +
  • +
  • +

    In CoinRun, the dynamics change per level, and the noisy “irrelevant” features change location across the 1D input, making this setup more challenging than the previous ones.

    +
  • +
  • +

    Overparameterization improves generalization in this scenario as well.

    +
  • +
diff --git a/_site/site/2020/01/30/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges.html b/_site/site/2020/01/30/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges.html new file mode 100644 index 00000000..5527a04c --- /dev/null +++ b/_site/site/2020/01/30/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges.html @@ -0,0 +1,179 @@ +

Introduction

+ +
    +
  • +

    The paper proposes to build a universal neural machine translation system that can translate between any pair of languages.

    +
  • +
  • +

    As a concrete instance, the paper prototypes a system that handles 103 languages (25 Billion translation pairs).

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Why universal Machine Translation

+ +
    +
  • +

    Hypothesis: The learning signal from one language should benefit the quality of translation for other languages.

    +
  • +
  • +

    This positive transfer is evident for low resource languages but tends to hurt the performance for high resource languages.

    +
  • +
  • +

    In practice, adding new languages reduces the effective per-task capacity of the model.

    +
  • +
+ +

Desiderata for Multilingual Translation Model

+ +
    +
  • +

    Maximize the number of languages within one model.

    +
  • +
  • +

    Maximize the positive transfer to low resource languages.

    +
  • +
  • +

    Minimize the negative interference to high resource languages.

    +
  • +
  • +

    Perform well in realistic, multi-domain settings.

    +
  • +
+ +

Datasets

+ +
    +
  • +

    In-house corpus generated by crawling and extracting parallel sentences from the web.

    +
  • +
  • +

    102 languages, with 25 billion sentence pairs.

    +
  • +
  • +

    Compared with the existing datasets, this dataset is much larger, spans more domains, has a good variation in the amount of data available for different language pairs, and is noisier. These factors bring additional challenges to the universal NMT setup.

    +
  • +
+ +

Baselines

+ +
    +
  • +

    Dedicated Bilingual models (variants of Transformers).

    +
  • +
  • +

    Most bilingual experiments used Transformer Big and a shared source-target Sentence Piece Model (SPM).

    +
  • +
  • +

    For medium and low resource languages, the Transformer Base was also considered.

    +
  • +
  • +

    A batch size of 1M tokens per batch is used. Increasing the batch size improves model quality and speeds up convergence.

    +
  • +
+ +

Effect of Transfer and Interference

+ +
    +
  • +

    The paper compares the following two setups with the baseline:

    + +
      +
    • +

      Combine all the datasets and train over them as if it is a single dataset.

      +
    • +
    • +

      Combine all the datasets but upsample the low resource languages so that all the languages are equally likely to appear in the combined dataset.

      +
    • +
    +
  • +
  • +

    A target language token is prepended to every input sentence to indicate which language it should be translated into.

    +
  • +
  • +

    Shared encoder and decoder are used across all the language pairs.

    +
  • +
  • +

    The two setups use a batch size of 4M tokens.

    +
  • +
+ +

Results

+ +
    +
  • +

    When all the languages are equally sampled, the performance on the low resource languages increases, at the cost of performance on high resource languages.

    +
  • +
  • +

    Training over all the data at once reverses this trend.

    +
  • +
+ +

Countering Interference

+ +
    +
  • +

    A temperature-based sampling strategy is used to control the ratio of samples from different language pairs. (A sketch follows this list.)

    +
  • +
  • +

    A balanced sampling strategy improves the performance for the high resource languages (though not as good as the multilingual baselines) while retaining the high transfer performance on the low resource languages.

    +
  • +
  • +

    Another reason behind the lagging performance (as compared to bilingual baselines) is the capacity of the multilingual models.

    +
  • +
  • +

    Some open problems to consider:

    + +
      +
    • +

      Task Scheduling - How to decide the order in which different language pairs should be trained.

      +
    • +
    • +

      Optimization for multitask learning - How to design optimizer, loss functions, etc. that can exploit task similarity.

      +
    • +
    • +

      Understanding Transfer:

      + +
        +
      • +

        For the low resource languages, translating from multiple languages to English leads to better performance than translating from English to multiple languages.

        +
      • +
      • +

        This can be explained as follows: In the first case (many-to-one), the setup is that of a multi-domain model (each source language is a domain). In the second case (one-to-many), the setup is that of multitasking.

        +
      • +
      • +

        NMT models seem to be more amenable to transfer across multiple domains than transfer across tasks (since the decoder distribution does not change much).

        +
      • +
      • +

        In terms of zero-shot performance, the performance for most language pairs increases as the number of languages changes from 10 to 102.

        +
      • +
      +
    • +
    +
  • +
+ +
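
A sketch of temperature-based sampling, assuming the probability of picking a language pair is proportional to its data size raised to $1/T$ (the sizes below are made up); $T = 1$ recovers sampling proportional to data size, while large $T$ approaches uniform sampling.

```python
def sampling_probs(dataset_sizes, T=5.0):
    weights = [size ** (1.0 / T) for size in dataset_sizes]
    total = sum(weights)
    return [w / total for w in weights]

sizes = [2_000_000_000, 10_000_000, 100_000]  # high/medium/low resource
print(sampling_probs(sizes, T=1.0))   # dominated by the high-resource pair
print(sampling_probs(sizes, T=5.0))   # low-resource pairs upsampled
```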

Effect of preprocessing and vocabulary

+ +
    +
  • +

    Sentence Piece Model (SPM) is used.

    +
  • +
  • +

    Temperature sampling is used to sample vocabulary from different languages.

    +
  • +
  • +

    Using a smaller vocabulary (and hence smaller sub-word tokens) performs better for low resource languages, probably due to improved generalization.

    +
  • +
  • +

    Low and medium resource languages tend to perform better with higher temperatures.

    +
  • +
+ +

Effect of Capacity

+ +
    +
  • Using deeper models improves performance (as compared to the wider models with the same number of parameters) on most language pairs.
  • +
diff --git a/_site/site/2020/02/06/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One.html b/_site/site/2020/02/06/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One.html new file mode 100644 index 00000000..f9de9301 --- /dev/null +++ b/_site/site/2020/02/06/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One.html @@ -0,0 +1,112 @@ +

Introduction

+ +
    +
  • +

    The paper proposes a framework for joint modeling of labels and data by interpreting a discriminative classifier p(y|x) as an energy-based model p(x, y).

    +
  • +
  • +

    Joint modeling provides benefits like improved calibration (i.e., the predictive confidence should align with the misclassification rate), robustness, and out-of-distribution detection.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Motivation

+ +
    +
  • +

    Consider a standard classifier $f_{\theta}(x)$ which produces a k-dimensional vector of logits.

    +
  • +
  • +

    $p_{\theta}(y | x) = softmax(f_{\theta}(x)[y])$

    +
  • +
  • +

    Using concepts from energy-based models, we can write $p_{\theta}(x, y) = \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}$ where $E_{\theta}(x, y) = -f_{\theta}(x)[y]$.

    +
  • +
  • +

    $p_{\theta}(x) = \sum_{y}{ \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}}$

    +
  • +
  • +

    $E_{\theta}(x) = -LogSumExp_y(f_{\theta}(x)[y])$

    +
  • +
  • +

    Note that in the standard discriminative setup, shifting the logits $f_{\theta}(x)$ does not affect the model, but it does affect $p_{\theta}(x)$. (See the sketch after this list.)

    +
  • +
  • +

    Computing $p_{\theta}(y | x)$ using $p_{\theta}(x, y)$ and $p_{\theta}(x)$ gives back the same softmax parameterization as before.

    +
  • +
  • +

    This reinterpreted classifier is referred to as a Joint Energy-based Model (JEM).

    +
  • +
+ +
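
A short PyTorch sketch of these identities, with random logits standing in for $f_{\theta}(x)$:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                 # f_theta(x) for a batch, k = 10

p_y_given_x = F.softmax(logits, dim=1)      # the usual classifier
energy_xy = -logits                         # E(x, y) = -f(x)[y]
energy_x = -torch.logsumexp(logits, dim=1)  # E(x) = -LogSumExp_y f(x)[y]
print(energy_x)

# log p(x) = -E(x) - log Z, so shifting all logits by a constant changes
# energy_x (and hence p(x)) but leaves p(y|x) untouched, as noted above.
shifted = logits + 3.0
assert torch.allclose(F.softmax(shifted, dim=1), p_y_given_x, atol=1e-6)
```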

Optimization

+ +
    +
  • +

    The log-likelihood of the data can be factorized as $log p_{\theta}(x, y) = log p_{\theta}(x) + log p_{\theta}(y | x)$.

    +
  • +
  • +

    The second factor can be trained using the standard CE loss. In contrast, the first factor can be trained using a sampler based on Stochastic Gradient Langevin Dynamics.

    +
  • +
+ +

Results

+ +

Hybrid Modelling

+ +
    +
  • +

    Datasets: CIFAR10, CIFAR100, SVHN.

    +
  • +
  • +

    Metrics: Inception Score, Frechet Inception Distance

    +
  • +
  • +

    JEM outperforms generative, discriminative, and hybrid models on both generative and discriminative tasks.

    +
  • +
+ +

Calibration

+ +
    +
  • +

    A calibrated classifier is the one where the predictive confidence aligns with the misclassification rate.

    +
  • +
  • +

    Dataset: CIFAR100

    +
  • +
  • +

    JEM improves calibration while retaining high accuracy.

    +
  • +
+ +

Out of Distribution (OOD) Detection

+ +
    +
  • +

    One way to detect OOD samples is to learn a density model that assigns a higher likelihood to in-distribution examples and lower likelihood to out of distribution examples.

    +
  • +
  • +

    JEM consistently assigns a higher likelihood to in-distribution examples.

    +
  • +
  • +

    The paper also proposes an alternate metric called approximate mass to detect OOD examples.

    +
  • +
  • +

    The intuition is that a point could have a high likelihood but be impossible to sample because its surroundings have a very low density.

    +
  • +
  • +

    On the other hand, the in-distribution data points would lie in a region of high probability mass.

    +
  • +
  • +

    Hence the norm of the gradient of log density could provide a useful signal to detect OOD examples.

    +
  • +
+ +

Robustness

+ +
    +
  • JEM is more robust to adversarial attacks as compared to discriminative classifiers.
  • +
diff --git a/_site/site/2020/02/13/Gradient-based-sample-selection-for-online-continual-learning.html b/_site/site/2020/02/13/Gradient-based-sample-selection-for-online-continual-learning.html new file mode 100644 index 00000000..1b28906a --- /dev/null +++ b/_site/site/2020/02/13/Gradient-based-sample-selection-for-online-continual-learning.html @@ -0,0 +1,122 @@ +

Introduction

+ +
    +
  • +

    Use of replay buffer (and rehearsal) is a common technique for mitigating catastrophic forgetting.

    +
  • +
  • +

    The paper builds on this idea but focuses on the sample selection aspect, i.e., which data points to store in the replay buffer.

    +
  • +
  • +

    It formulates sample selection as a constraint minimization problem and shows that the proposed formulation is equivalent to maximizing the diversity of the samples with respect to parameter gradient.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Setup

+ +
    +
  • +

    Supervised learning tasks

    +
  • +
  • +

    Online stream of data (i.e., one or few datapoints accessed at a time).

    +
  • +
  • +

    When considering the $t^{th}$ task, the objective is: minimize the loss on the current task without increasing the loss on any of the previous tasks.

    +
  • +
  • +

    The above constraint can be rephrased as $\langle g_t, g_i \rangle > 0 \ \forall i \in [0, t-1]$, where $g_t$ is the gradient for the $t^{th}$ task.

    +
  • +
  • +

    This is equivalent to saying that the current task gradient should not interfere negatively with the previous task gradient.

    +
  • +
+ +

Approach

+ +
    +
  • +

    In practice, the gradient constraint is enforced only over the examples in the minibatch (and not the full dataset).

    +
  • +
  • +

    The paper interprets the constraint satisfaction problem as approximating an optimal feasible region (in the gradient space) where current task performance can be improved without hurting the performance on the previous tasks.

    +
  • +
  • +

    The approximate region (of the shape of a polyhedral convex cone) is determined using only the examples from the replay buffer. Hence, the optimal region (defined for the entire dataset) would be contained within the approximate region.

    +
  • +
  • +

    The size of the approximate region can be measured in terms of the solid angle defined by the intersection between the approximate region and a unit sphere.

    +
  • +
  • +

    The paper argues that the approximate region can be made smaller by reducing the angle between each pair of gradients.

    +
  • +
  • +

    The set of points satisfying the constraint can be computed using Integer Quadratic Programming (IQP).

    +
  • +
  • +

    Given that the problem setup is online learning, using IQP for every new data point is not feasible.

    +
  • +
  • +

    An inexact, greedy alternative is suggested, where a score is maintained for each example in the buffer.

    +
  • +
  • +

    When a new datapoint comes in, the score is computed and used to decide if the existing datapoint in the buffer should be replaced.

    +
  • +
  • +

    The score is the maximal cosine similarity of the current example's gradient with the gradients of a random sample of examples in the buffer. (See the sketch after this list.)

    +
  • +
+ +
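
A possible PyTorch sketch of the greedy score; the flattened per-example gradients and the exact sampling and replacement details are simplifications of the method described above.

```python
import torch
import torch.nn.functional as F

def score(new_grad, buffer_grads, n_samples=10):
    """Maximal cosine similarity between the new example's gradient and the
    gradients of a random sample from the buffer (lower = more diverse)."""
    idx = torch.randperm(len(buffer_grads))[:n_samples]
    sample = torch.stack([buffer_grads[i] for i in idx])
    return F.cosine_similarity(sample, new_grad.unsqueeze(0)).max().item()

# Illustrative use: an incoming example whose score is lower than that of a
# stored example is a candidate to replace it.
buffer_grads = [torch.randn(100) for _ in range(50)]
print(score(torch.randn(100), buffer_grads))
```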

Results

+ +
    +
  • +

    Benchmarks

    + +
      +
    • +

      Disjoint MNIST

      +
    • +
    • +

      Permuted MNIST

      +
    • +
    • +

      Disjoint CIFAR10

      +
    • +
    +
  • +
  • +

    Shared head setup

    +
  • +
  • +

    Baselines for sample selection

    + +
      +
    • +

      Randomly select examples to keep in the buffer.

      +
    • +
    • +

      Perform clustering - either in the feature space or in the gradient space.

      +
    • +
    • +

      Use IQP to select the examples. This approach is not used for CIFAR10, as it is computationally costly.

      +
    • +
    • +

      It would be interesting if the paper had considered baselines like selecting samples which had the largest loss.

      +
    • +
    +
  • +
  • +

    The proposed greedy approach outperforms the other methods.

    +
  • +
  • +

    In an ablation experiment, the paper shows that the proposed approach works better than reservoir sampling (when the underlying data distribution is imbalanced).

    +
  • +
  • +

    Another experiment compares the proposed approach with Gradient Episodic Memory and iCaRL. For Permuted and Disjoint MNIST, the different methods perform quite similar though the proposed approach performs better on Disjoint CIFAR10.

    +
  • +
+ diff --git a/_site/site/2020/02/20/ELECTRA-Pre-training-Text-Encoders-as-Discriminators-Rather-Than-Generators.html b/_site/site/2020/02/20/ELECTRA-Pre-training-Text-Encoders-as-Discriminators-Rather-Than-Generators.html new file mode 100644 index 00000000..4c11a803 --- /dev/null +++ b/_site/site/2020/02/20/ELECTRA-Pre-training-Text-Encoders-as-Discriminators-Rather-Than-Generators.html @@ -0,0 +1,150 @@ +

Introduction

+ +
    +
  • +

    Masked Language Modeling (MLM) is a common technique for pre-training language-based models. The idea is to “corrupt” some tokens in the input text (around 15%) by replacing them with the [MASK] token and then training the network to reconstruct (or predict) the corrupted tokens.

    +
  • +
  • +

    Since the network learns from only about 15% of the tokens, the computational cost of training using MLM can be quite high.

    +
  • +
  • +

    The paper proposes to use a “replaced token detection” task where some tokens in the input text are replaced by other plausible tokens.

    +
  • +
  • +

    For each token in the modified text, the network has to predict if the token has been replaced or not.

    +
  • +
  • +

    The alternative token is generated using a small generator network.

    +
  • +
  • +

    Unlike the previous MLM setup, the proposed task is defined for all the input tokens, thus utilizing the training data more efficiently.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Approach

+ +
    +
  • +

    The proposed approach is called ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately)

    +
  • +
  • +

    Two neural networks - Generator (G) and Discriminator (D) are trained.

    +
  • +
  • +

    Each network has a Transformer-based text encoder that maps a sequence of words into a sequence of vectors.

    +
  • +
  • +

    Given an input sequence x (of length N), k indices are chosen for replacing the tokens.

    +
  • +
  • +

    For each index, the generator produces a distribution over tokens. A token is sampled to replace in the original sequence. The resulting sequence is referred to as the corrupted sequence.

    +
  • +
  • +

    Given the corrupted sequence, the Discriminator predicts which token comes from the data distribution and which comes from the generator.

    +
  • +
  • +

    The generator is trained using the MLM setup, and the Discriminator is trained using the discriminative loss. (A sketch of the corruption step follows this list.)

    +
  • +
  • +

    After pre-training, only the Discriminator is finetuned on the downstream tasks.

    +
  • +
+ +
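
A PyTorch sketch of how the corrupted sequence and the Discriminator's targets might be constructed; the toy tensors and the stand-in generator logits are assumptions (the actual generator and Discriminator are Transformer-based encoders, as described above).

```python
import torch

tokens = torch.tensor([[12, 7, 99, 3, 42]])        # original input ids
mask_pos = torch.tensor([[0, 1, 0, 1, 0]]).bool()  # ~15% positions to corrupt

vocab = 1000
generator_logits = torch.randn(1, 5, vocab)        # stand-in generator output
sampled = torch.distributions.Categorical(logits=generator_logits).sample()

corrupted = torch.where(mask_pos, sampled, tokens)  # plausible replacements
# Discriminator target: 1 where the token differs from the original. If the
# generator happens to sample the original token, the label stays 0.
# Note the label is defined at *every* position, not just the masked ones.
is_replaced = (corrupted != tokens).float()
print(corrupted, is_replaced)
```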

Experiments

+ +
    +
  • +

    Datasets

    + +
      +
    • +

      GLUE Benchmark

      +
    • +
    • +

      Stanford QA dataset

      +
    • +
    +
  • +
  • +

    Architecture Choices

    + +
      +
    • +

      Sharing word embeddings between generator and Discriminator helps.

      +
    • +
    • +

      Tying all the encoder weights leads to marginal improvement but forces the generator and the Discriminator to be of the same size. Hence only embeddings are shared.

      +
    • +
    • +

      The generator model is kept smaller than the Discriminator, as a strong generator can make the training difficult for the Discriminator.

      +
    • +
    • +

      A two-stage training procedure was explored where only the generator is trained for n steps. Then the weights of the generator are used to initialize the Discriminator. The Discriminator is then trained for n steps while keeping the generator fixed.

      +
    • +
    • +

      This two-stage setup provides a nice curriculum for the Discriminator but does not outperform the joint training based setup.

      +
    • +
    • +

      An adversarial loss based setup is also explored, but it does not work well, probably for the following reasons:

      + +
        +
      • +

        The adversarially trained generator is not as good as the MLM generator.

        +
      • +
      • +

        The adversarially trained generator produces a low entropy output distribution.

        +
      • +
      +
    • +
    +
  • +
  • +

    Results

    + +
      +
    • Both small and large ELECTRA models outperform baselines models like BERT, RoBERTa, ELMo and GPT.
    • +
    +
  • +
  • +

    Ablations

    + +
      +
    • +

      ELECTRA-15 is a variant of ELECTRA where the Discriminator is trained on only 15% of the tokens (similar to the MLM setup). This reduces performance significantly.

      +
    • +
    • +

      Replace MLM setup

      + +
        +
      • +

        Perform MLM training, but instead of using [MASK], use a token sampled from the generator.

        +
      • +
      • +

        This improves the performance marginally.

        +
      • +
      +
    • +
    • +

      All-token MLM

      + +
        +
      • +

        In the MLM setup, replace the [MASK] token by the sampled tokens and train the MLM model to generate all the words.

        +
      • +
      • +

        In practice, the MLM model can either generate a word or copy the existing word.

        +
      • +
      • +

        This approach closes much of the gap between BERT and ELECTRA.

        +
      • +
      +
    • +
    +
  • +
  • +

    Interestingly, ELECTRA outperforms All-token MLM BERT, suggesting that ELECTRA may be benefitting from parameter efficiency since it does not have to learn a distribution over all the words.

    +
  • +
diff --git a/_site/site/2020/02/27/mixup-Beyond-Empirical-Risk-Minimization.html b/_site/site/2020/02/27/mixup-Beyond-Empirical-Risk-Minimization.html new file mode 100644 index 00000000..cd6e6b69 --- /dev/null +++ b/_site/site/2020/02/27/mixup-Beyond-Empirical-Risk-Minimization.html @@ -0,0 +1,71 @@ +

Introduction

+ +
    +
  • +

    The paper proposes a simple and dataset-agnostic data augmentation mechanism called mixup.

    +
  • +
  • +

    Link to the paper

    +
  • +
  • +

    Consider two training examples, $(x_1, y_1)$ and $(x_2, y_2)$, where $x_1$ and $x_2$ are the datapoints and $y_1$ and $y_2$ are the labels.

    +
  • +
  • +

    New training examples of the form $(\lambda \times x_1 + (1-\lambda) \times x_2, \lambda \times y_1 + (1-\lambda) \times y_2)$ are constructed by considering the linear interpolation of the datapoints and the labels. Here $\lambda \in [0, 1]$. (See the sketch after this list.)

    +
  • +
  • +

    $\lambda$ is sampled from a Beta distribution $Beta(\alpha, \alpha)$ where $\alpha \in (0, \infty)$.

    +
  • +
  • +

    Setting $\lambda$ to 0 or 1 eliminates the effect of mixup.

    +
  • +
  • +

    Mixup encourages the neural network to favor linear behavior between the training examples.

    +
  • +
+ +
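
A minimal PyTorch sketch of mixup, pairing each example with another one from the same (shuffled) batch, which matches the implementation note later in this summary; one-hot labels are assumed.

```python
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()  # lambda ~ Beta(a, a)
    perm = torch.randperm(x.size(0))        # pair each example with another
    x_mix = lam * x + (1 - lam) * x[perm]   # interpolate the inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # and the labels
    return x_mix, y_mix

x = torch.randn(8, 3, 32, 32)
y = torch.eye(10)[torch.randint(0, 10, (8,))]
x_mix, y_mix = mixup_batch(x, y)
```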

Experiments

+ +
    +
  • +

    Supervised Learning

    + +
      +
    • +

      ImageNet for ResNet-50, ResNet-101 and ResNext-101.

      +
    • +
    • +

      CIFAR10/CIFAR100 for PreAct ResNet-18, WideResNet-28-10 and DenseNet.

      +
    • +
    • +

      Google command dataset for LeNet and VGG.

      +
    • +
    +
  • +
  • +

    In all these setups, adding mixup improves the performance of the model.

    +
  • +
  • +

    Mixup makes the model more robust to noisy labels. Moreover, mixup + dropout improves over mixup alone. This hints that mixup’s benefits are complementary to those of dropout.

    +
  • +
  • +

    Mixup makes the network more robust to adversarial examples in both white-box and black-box settings (ImageNet + Resnet101).

    +
  • +
  • +

    Mixup also stabilizes the training of GANs by acting as a regularizer for the gradient of the discriminator.

    +
  • +
+ +

Observations

+ +
    +
  • +

    Convex combination of three or more examples (with weights sampled from a Dirichlet distribution) does not provide gains over the case of two examples.

    +
  • +
  • +

    In the authors’ implementation, mixup is applied between images of the same batch (after shuffling).

    +
  • +
  • +

    Interpolating only between inputs, with the same labels, did not lead to the same kind of gains as mixup.

    +
  • +
diff --git a/_site/site/2020/03/05/What-Does-Classifying-More-Than-10,000-Image-Categories-Tell-Us.html b/_site/site/2020/03/05/What-Does-Classifying-More-Than-10,000-Image-Categories-Tell-Us.html new file mode 100644 index 00000000..ea9c212e --- /dev/null +++ b/_site/site/2020/03/05/What-Does-Classifying-More-Than-10,000-Image-Categories-Tell-Us.html @@ -0,0 +1,44 @@ +
    +
  • +

    The paper is among the first to study image classification at a large scale (10000 classes and 9 million examples).

    +
  • +
  • +

    This is a relatively old paper (2010), so some of the findings may not be relevant anymore. For instance, specific scaling challenges have since been significantly overcome. Moreover, the paper uses approaches like SVM and KNN (popular at that time) and does not use CNNs.

    +
  • +
  • +

    Other observations of the paper are still very relevant, and it is an educating read. For example, since ImageNet classes are based on WordNet, the paper looks at the effect of the semantic relations (tree) of categories on the performance of the trained models.

    +
  • +
  • +

    Link to the paper

    +
  • +
  • +

    The paper considers three variants of the ImageNet dataset - ImageNet 10K (10184 classes), ImageNet 7K (7404 classes) and ImageNet 1K (1000 classes).

    +
  • +
  • +

    They also consider smaller variants with randomly sampled classes or cases where the examples are sampled from one high-level category like vehicles.

    +
  • +
  • +

    SVM and KNN models are used with features like Bag of Words, GIST descriptors, and spatial pyramid of histograms.

    +
  • +
  • +

    Observations

    + +
      +
    • +

      A model that performs well on the smaller dataset (with fewer classes) may not perform well on the larger dataset (with more classes).

      +
    • +
    • +

      There seems to be an approximate correlation between the structure of the semantic hierarchy of the labels (obtained via WordNet) and visual confusion between the categories.

      +
    • +
    • +

      For example, consider two high-level concepts, say artifacts and animals. The model is less likely to confuse classes across the high-level concepts but more likely to confuse classes within each concept.

      +
    • +
    • +

      For dense categories (categories where the classes are semantically more closely related to each other), the model tends to make more mistakes (even if the number of classes is fewer).

      +
    • +
    • +

      Accounting for the label hierarchy (in the loss function) improves the classification performance.

      +
    • +
    +
  • +
diff --git a/_site/site/2020/03/12/Competitive-Training-of-Mixtures-of-Independent-Deep-Generative-Models.html b/_site/site/2020/03/12/Competitive-Training-of-Mixtures-of-Independent-Deep-Generative-Models.html new file mode 100644 index 00000000..66d161f2 --- /dev/null +++ b/_site/site/2020/03/12/Competitive-Training-of-Mixtures-of-Independent-Deep-Generative-Models.html @@ -0,0 +1,95 @@ +

Introduction

+ +
    +
  • +

    The paper proposes a Competitive training mechanism to train a mixture of independent generative models.

    +
  • +
  • +

    The idea is that this mixture of different models would divide the data distribution amongst themselves and specialize to their respective splits.

    +
  • +
  • +

    The training procedure is related to clustering-based methods.

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Motivation

+ +
    +
  • +

    In causal modeling, a common assumption is that the data is generated by a set of independent mechanisms.

    +
  • +
  • +

    It is not known which mechanism generates which datapoint and recovering the underlying mechanisms can be modeled as learning a structural causal generative model.

    +
  • +
+ +

Setup

+ +
    +
  • +

    The paper assumes that the supports of the different generators do not overlap, i.e., the underlying data distribution is factorized into non-overlapping regions.

    +
  • +
  • +

    This data factorization is learned using a set of discriminators.

    +
  • +
  • +

    If there are $k$ generators, $k$ binary partition functions $c_1, \dots, c_k$ are used.

    +
  • +
  • +

    For a given datapoint $x$, if $c_i(x) = 1$ then $c_j(x) = 0$ for all other $j$ and $x$ is assigned to $i^{th}$ generator.

    +
  • +
  • +

    For a fixed partition function $c_j^t$ ($t$ denotes the partition function at time $t$), minimize the sum of f-divergence between the model and the data distribution (that is assigned to it). The loss formulation is an upper bound on the f-divergence of the mixture model.

    +
  • +
  • +

    In the next step, the data points are re-assigned to the generative models, based on the likelihood of each data point for each model.

    +
  • +
  • +

    The likelihood is estimated by training a discriminator that can distinguish the generated samples from the real samples.

    +
  • +
+ +

Independence as an inductive bias

+ +
    +
  • +

    The independence assumption may be too restrictive because the low-level features will be common across the distribution splits.

    +
  • +
  • +

    This “violation” can be avoided by pretraining the model using a uniform random split of the dataset. In that case, the independence assumption will hold approximately after pretraining.

    +
  • +
  • +

    Another approach could be to share some parameters across the models.

    +
  • +
  • +

    A “load balancing” approach is also used: if not enough data points are assigned to a model, it keeps training on the data points that were previously assigned to it.

    +
  • +
+ +

Comparison to VAEs and GANs

+ +
    +
  • +

    VAEs tend to be “overly inclusive” of the training distribution, i.e., they try to cover the entire support of the distribution.

    +
  • +
  • +

    GANs are prone to mode collapse where the model focuses only on one part of the distribution.

    +
  • +
  • +

    The proposed method provides a middle ground where the different generative models can focus on different parts of the distribution.

    +
  • +
+ +

Experiments

+ +
    +
  • +

    The experiments seem to be limited. The paper shows that their proposed setup improves over the VAE and GAN baselines.

    +
  • +
  • +

    For datasets, the paper uses two-dimensional synthetic data, MNIST, and CelebA.

    +
  • +
diff --git a/_site/site/2020/04/09/CURL-Contrastive-Unsupervised-Representations-for-Reinforcement-Learning.html b/_site/site/2020/04/09/CURL-Contrastive-Unsupervised-Representations-for-Reinforcement-Learning.html new file mode 100644 index 00000000..49a83345 --- /dev/null +++ b/_site/site/2020/04/09/CURL-Contrastive-Unsupervised-Representations-for-Reinforcement-Learning.html @@ -0,0 +1,116 @@ +

Introduction

+ +
    +
  • +

    The paper proposes a contrastive learning approach, called CURL, for performing off-policy control from raw pixel observations (by transforming them into high dimensional features).

    +
  • +
  • +

    The idea is motivated by the application of contrastive losses in computer vision. But there are additional challenges:

    + +
      +
    • +

      The learning agent has to perform both unsupervised and reinforcement learning.

      +
    • +
    • +

      The “dataset” for unsupervised learning is not fixed and keeps changing with the policy of the agent.

      +
    • +
    +
  • +
  • +

    Unlike prior work, CURL introduces fewer changes in the underlying RL pipeline and provides more significant sample efficiency gains. For example, CURL (trained on pixels) nearly matches the performance of SAC policy (trained on state-based features).

    +
  • +
  • +

    Link to the paper

    +
  • +
+ +

Implementation

+ +
    +
  • +

    CURL uses instance discrimination. Deep RL algorithms commonly use a stack of temporally consecutive frames as input to the policy. In such cases, instance discrimination is applied to all the images in the stack.

    +
  • +
  • +

    For generating the positive and negative samples, random crop data augmentation is used.

    +
  • +
  • +

    Bilinear inner product is used as the similarity metric as it outperforms the commonly used normalized dot product.

    +
  • +
  • +

    For encoding the anchors and the samples, InfoNCE is used. It learns two encoders $f_q$ and $f_k$ that transform the query (base input) and the key (positive/negative samples) into latent representations. The similarity loss is applied to these latents.

    +
  • +
  • +

    Momentum contrast is used to update the parameters ($\theta_k$) of the $f_k$ network, i.e., $\theta_k = m \theta_k + (1-m) \theta_q$. $\theta_q$ are the parameters of the $f_q$ network and are updated in the usual way, using both the contrastive loss and the RL loss. (See the sketch after this list.)

    +
  • +
+ +
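
A PyTorch sketch of two of these ingredients, the momentum update and the bilinear similarity, with small linear encoders standing in for the actual image encoders (sizes and the momentum value are illustrative):

```python
import torch
import torch.nn as nn

f_q = nn.Linear(64, 50)               # query encoder (trained normally)
f_k = nn.Linear(64, 50)               # key encoder (momentum-updated)
f_k.load_state_dict(f_q.state_dict())
W = nn.Parameter(torch.rand(50, 50))  # bilinear similarity matrix

def momentum_update(m=0.95):
    with torch.no_grad():
        for pk, pq in zip(f_k.parameters(), f_q.parameters()):
            pk.mul_(m).add_((1 - m) * pq)  # theta_k = m*theta_k + (1-m)*theta_q

def bilinear_sim(q, k):
    return q @ W @ k.t()              # logits for the contrastive (InfoNCE) loss

obs_q, obs_k = torch.randn(8, 64), torch.randn(8, 64)  # augmented crops
logits = bilinear_sim(f_q(obs_q), f_k(obs_k).detach())
```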

Experiment

+ +
    +
  • +

    DMControl100K and Atari100K refer to the setups where the agent is trained for 100K steps on DMControl and Atari, respectively.

    +
  • +
  • +

    Metrics:

    + +
      +
    • +

      Sample Efficiency - How many steps does the baseline need to match CURL’s performance after 100K steps.

      +
    • +
    • +

      Performance - Ratio of episodic returns by CURL vs. the baseline after 100K steps.

      +
    • +
    +
  • +
  • +

    Baselines:

    + + +
  • +
  • +

    Results

    + +
      +
    • +

      DM Control

      + +
        +
      • +

        CURL outperforms all pixel-based RL algorithms by a significant margin for all environments on DMControl and most environments on Atari.

        +
      • +
      • +

        On DMControl, it closely matches the performance of the SAC agent trained on state-space observations.

        +
      • +
      • +

        On Atari, it achieves better median human normalizes score (HNS) than the other baselines and close to human efficiency in three environments.

        +
      • +
      +
    • +
    +
  • +
diff --git a/_site/site/2020/04/30/Supervised-Contrastive-Learning.html b/_site/site/2020/04/30/Supervised-Contrastive-Learning.html new file mode 100644 index 00000000..0c461c9a --- /dev/null +++ b/_site/site/2020/04/30/Supervised-Contrastive-Learning.html @@ -0,0 +1,110 @@ +

Introduction

  • The paper builds on the prior work on self-supervised contrastive learning and extends it to the supervised learning case, where many positive examples are available for each anchor.

  • Link to the paper

Approach

  • The representation learning framework has the following components:

Data Augmentation Module

  • This module transforms the input example. The paper considers the following strategies:

    • Random crop, followed by resizing.

    • Auto Augment - a method to search for data augmentation strategies.

    • Rand Augment - randomly sampling a sequence of data augmentations, with repetition.

    • SimAugment - sequentially apply random color distortion and Gaussian blurring, followed by a probabilistic sparse image warp.

Encoder Network

  • This module maps the input to a latent representation.

  • The same network is used to encode both the anchor and the sample.

  • The representation vector is normalized to lie on the unit hypersphere.

Projection Network

  • This module maps the normalized representation to another representation, on which the contrastive loss is computed.

  • This network is only used for training with the supervised contrastive loss.

Loss function

  • The paper extends the standard contrastive loss formulation to handle multiple positive examples.

  • The main effect is that the modified loss accounts for all the same-class pairs (from within the sampled batch as well as the augmented batch).

  • The paper shows that the gradient (corresponding to the modified loss) causes the learning to focus more on hard examples. “Hard” cases are the ones where contrasting the anchor benefits the encoder more.

  • The proposed loss can also be seen as a generalization of the triplet loss. A sketch of the loss follows.
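
A minimal sketch of a supervised contrastive loss of this form, assuming L2-normalized projections `z` of shape (B, D), integer class labels, and a temperature `tau`; batch construction and augmentation details are simplified relative to the paper:

```python
import torch

def sup_con_loss(z, labels, tau=0.07):
    # z: (B, D) L2-normalized projections; labels: (B,) class ids
    sim = z @ z.t() / tau                                   # pairwise similarities
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # positives: same-class pairs, excluding self-similarity
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    # log-softmax over all other samples (positives + negatives)
    denom = torch.logsumexp(sim.masked_fill(~not_self, float('-inf')), dim=1, keepdim=True)
    log_prob = sim - denom
    # average log-likelihood over each anchor's positives, then over anchors
    loss = -(log_prob * pos_mask.float()).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```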

Experiments

  • Dataset - ImageNet.

  • Models - ResNet50, ResNet200.

  • The network is “pretrained” using the supervised contrastive loss.

  • After pre-training, the projection network is removed, and a linear classifier is added.

  • This classifier is trained with the CE loss while the rest of the network is kept fixed.

Results

  • Using the supervised contrastive loss improves over all the baseline models and data augmentation approaches.

  • The resulting classifier is more robust to image corruptions, as shown by the mean Corruption Error (mCE) metric on the ImageNet-C dataset.

  • The model is more stable to the choice of hyperparameter values (like optimizers, data augmentation, and learning rates).

Training Details

  • The supervised contrastive loss is trained for 700 epochs during pre-training.

  • Each step is about 50% more expensive than performing CE.

  • The dense classifier layer can be trained in as few as ten epochs.

  • The temperature value is set to 0.07. Using a lower temperature is better than using a higher temperature.

diff --git a/_site/site/2020/06/18/On-the-Difficulty-of-Warm-Starting-Neural-Network-Training.html b/_site/site/2020/06/18/On-the-Difficulty-of-Warm-Starting-Neural-Network-Training.html new file mode 100644 index 00000000..b38c47dc --- /dev/null +++ b/_site/site/2020/06/18/On-the-Difficulty-of-Warm-Starting-Neural-Network-Training.html @@ -0,0 +1,136 @@ +

Introduction

  • The paper considers learning scenarios where the training data is available incrementally (and not all at once).

  • For example, in some applications, new data is available periodically (e.g., the latest news articles come out every day).

  • The paper highlights that, in such scenarios, the conventional wisdom of “warm starting” does not apply.

  • When new data is available, it is better to train a new model from scratch than to update the model trained on previously available data.

  • While the two setups lead to similar training performance, the randomly initialized model has much better generalization performance.

  • Link to the paper

Basic Batch Updating

  • Create two random, equally-sized partitions of the training data.

  • Train the model till convergence on the first half of the data. Then train the model on the entire dataset (a sketch of this protocol follows the list).

  • Models: ResNet18, MLPs, Logistic Regression (LR).

  • Datasets: CIFAR10, CIFAR100, SVHN.

  • Optimizers: Adam, SGD.

  • Warm starting hurts generalization in all the cases.

  • The effect is more pronounced for ResNets and MLPs (compared to LR) and on the harder CIFAR10 dataset (compared to SVHN).
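
A minimal sketch of the basic batch-updating comparison; `make_model`, `train_to_convergence`, and `evaluate` are placeholder helpers, not names from the paper:

```python
import copy

def warm_start_experiment(make_model, first_half, full_data, test_data,
                          train_to_convergence, evaluate):
    # Phase 1: train on the first half of the data until convergence.
    model = make_model()
    train_to_convergence(model, first_half)

    # Warm start: continue training the SAME model on the full dataset.
    warm = copy.deepcopy(model)
    train_to_convergence(warm, full_data)

    # Baseline: train a FRESH model on the full dataset from scratch.
    fresh = make_model()
    train_to_convergence(fresh, full_data)

    # The paper's finding: similar training accuracy for both,
    # but the fresh model generalizes better on held-out data.
    return evaluate(warm, test_data), evaluate(fresh, test_data)
```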

Online Learning


Passive Online Learning

  • The model is given access to k new training examples at each iteration.

  • A warm-started model reuses the previously trained model and trains (till convergence) on the new batch of k items.

  • A “randomly initialized” model is trained on all the examples (seen so far) from scratch.

  • Dataset: CIFAR10.

  • Model: ResNet18.

  • As more training data becomes available, the generalization gap between the two setups increases, and warm starting hurts generalization.

Active Online Learning

  • In this setup, the learner selects the k new examples to add to the training dataset (using margin-based sampling).

  • Like in the previous setup, the warm-start strategy still hurts generalization.

Transfer Learning

  • Train a ResNet18 model on the CIFAR10 dataset and use this model to warm start training on the SVHN dataset.

  • When a small percentage of the SVHN dataset is used, the setup resembles pretraining / transfer learning and performs better than training from scratch.

  • As the percentage of the SVHN dataset increases, the warm-started approach starts underperforming.

Overcoming warm start problem

  • ResNet18 model on the CIFAR10 dataset.

  • When performing a hyperparameter sweep over the learning rate and batch size, it is possible to train warm-started models to reach the same generalization performance as training from scratch.

  • Though, in that case, there are no computational savings, as the warm-started models take about the same time (to converge) as the randomly initialized models.

  • The increased training time indicates that the warm-started model probably needs to forget the knowledge from the previous training rounds.

  • Warm-started ResNet models that generalize well have a low correlation to their initialization (measured via the Pearson correlation coefficient between the model weights).

  • Generalization is damaged even when using a model trained on incomplete data for only a few epochs.

  • For warm-started models, the gradient (corresponding to the “new” data) is higher than that for randomly initialized models. This hints that regularization may help to close the generalization gap. But in practice, regularization helps both the warm-started and the randomly initialized models.

  • Warm starting only a few layers also does not close the gap.

  • Adding some noise to the warm-started model (with the motivation of having a partially random initialization) does help somewhat, but it also increases the training time.

  • Framing the problem as an instance of catastrophic forgetting, the authors use the EWC algorithm but report that using EWC hurts model performance.

  • The paper does not propose a solution to the problem but provides a thorough analysis of the problem setup, which is quite useful for understanding the phenomenon itself.
diff --git a/_site/site/2020/06/25/Network-Randomization-A-Simple-Technique-for-Generalization-in-Deep-Reinforcement-Learning.html b/_site/site/2020/06/25/Network-Randomization-A-Simple-Technique-for-Generalization-in-Deep-Reinforcement-Learning.html new file mode 100644 index 00000000..a63fdaa9 --- /dev/null +++ b/_site/site/2020/06/25/Network-Randomization-A-Simple-Technique-for-Generalization-in-Deep-Reinforcement-Learning.html @@ -0,0 +1,78 @@ +

Introduction

  • The paper proposes a technique for improving the generalization ability of RL agents when evaluated on an unseen environment (which is similar to the training environment).

  • Link to the paper

  • Link to the code

Approach

  • The key idea is to learn features that are invariant across environments by using a randomized CNN ($f$) that randomly perturbs the inputs.

  • The policy is trained using the randomized observations obtained using $f$.

  • Invariant features are learned using a feature matching (FM) loss that matches the feature representations of the original and randomized observations.

  • The random network’s parameters are initialized as $\alpha I + (1 - \alpha) N\left(0, \sqrt{\frac{2}{n_{in} + n_{out}}}\right)$, where $\alpha \in [0, 1]$, $N$ denotes the Gaussian distribution, and $n_{in}, n_{out}$ denote the number of input and output channels, respectively.

  • The Xavier normal distribution is used for randomization to maintain the variance between the input and the randomized input.

  • $f$ is re-randomized at every iteration (a sketch of this step follows the list).

  • During inference, the expected action is computed by averaging over M samples (i.e., randomizing the input M times).
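
A minimal sketch of the per-iteration randomization and the FM loss, assuming a single random conv layer prepended to the policy encoder. The per-layer coin flip is a simplification of the paper's mixture initialization, and `alpha` and the loss form are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rerandomize(conv: nn.Conv2d, alpha: float = 0.1):
    # Resample the random layer: identity mapping with probability alpha,
    # Xavier-normal noise otherwise (simplified from the paper's mixture).
    with torch.no_grad():
        nn.init.xavier_normal_(conv.weight)
        if torch.rand(()) < alpha:
            conv.weight.zero_()
            c = conv.kernel_size[0] // 2  # center tap -> identity per channel
            for i in range(min(conv.in_channels, conv.out_channels)):
                conv.weight[i, i, c, c] = 1.0

def feature_matching_loss(encoder, obs, f):
    # Encourage features of clean and randomized observations to match.
    h_clean = encoder(obs)
    h_rand = encoder(f(obs))
    return F.mse_loss(h_rand, h_clean.detach())
```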

Environments

  • 2D CoinRun, 3D DeepMind Lab, 3D robotics control task.

  • The evaluation environments consist of different styles of backgrounds, objects, and floors.

Baselines

  • Regularization methods: Dropout, L2 regularization, Batch Normalization.

  • Dataset augmentation methods: Cutout, Gray out, Inversion, Color Jitter.

Results

  • On CoinRun, the proposed approach significantly outperforms the other baselines during evaluation. The performance improvement saturates around 10M samples.

  • Cycle consistency is used to measure the similarity between two trajectories. The proposed method improves cycle consistency as compared to the vanilla PPO baseline. It also produces sharper activation maps in the evaluation environments.

  • For the large-scale experiments, when evaluated on 500 levels of CoinRun, the proposed method improves the success rate from 39.8% to 58.7%.

  • On DeepMind Lab and Surreal robotics control tasks, the proposed method leads to agents that generalize better on the unseen environments (during evaluation).
diff --git a/_site/site/2020/07/02/When-to-use-parametric-models-in-reinforcement-learning.html b/_site/site/2020/07/02/When-to-use-parametric-models-in-reinforcement-learning.html new file mode 100644 index 00000000..60fae34b --- /dev/null +++ b/_site/site/2020/07/02/When-to-use-parametric-models-in-reinforcement-learning.html @@ -0,0 +1,106 @@ +

Introduction

  • The paper compares replay-based approaches with model-based approaches in Reinforcement Learning (RL).

  • It hypothesizes that if the parametric model is only used for generating transitions for the update rule, then, under certain conditions, replay-based approaches will be as good as model-based approaches.

  • Link to the paper

Terminology

  • Planning: any algorithm that uses additional computation (but not additional experience) to improve its performance.

  • Learning: any algorithm that uses additional experience to improve its performance.

  • In some cases, a replay buffer can be seen as a model. For example, querying with a state-action pair (from the replay buffer) is similar to querying the (expected) next state and reward from a model. In general, the model will be more flexible, as any arbitrary state-action pair can be used for querying.

Computation Properties

  • Parametric models require more computation than sampling from a replay buffer. In contrast, the cost of maintaining a replay buffer scales linearly with its capacity.

  • Parametric models are useful for planning multiple steps into the future, while it is much harder to do so with a replay buffer (even more so with pixel observations).

  • An imperfect model may be more suitable for selecting actions (instead of updating the policy) because the chosen action, when executed in the environment, will lead to transitions that would improve the model.

  • When planning with an imperfect model, it is better to plan backward, as the update is then applied to an imaginary state (which would not be encountered if the model is poor).

  • If the model is accurate, forward and backward planning are equivalent. This distinction between forward and backward updates does not apply to replay buffers.

Failure to learn

  • When using a replay buffer and (i) uniformly replaying transitions, (ii) from a buffer containing only full episodes, and (iii) using TD updates, the algorithm is stable.

  • When using a replay buffer and (i) uniformly replaying transitions, (ii) generating transitions using a model, and (iii) using TD updates, the algorithm can diverge.

  • This case can be fixed by:

    • Repeatedly iterating over the model and sampling transitions to and from the states the model generates (not a satisfactory solution).

    • Using multiple-step returns (this can increase the variance).

    • Using algorithms designed specifically for stable off-policy learning (not a definitive solution).

Model-based algorithms at scale

  • The paper compares SimPLe (model-based) with Rainbow DQN (replay-based).

  • The paper shows that, when using a similar number of real interactions, Rainbow DQN needs fewer replay samples than SimPLe needs model samples, making it more efficient (computation-wise).

  • Changes to Rainbow DQN:

    • Increase the number of steps, for bootstrapping, from 3 to 20.

    • Reduce the number of steps, before sampling starts from the replay buffer, from 20K to 1600.

  • With these changes, Rainbow DQN outperforms SimPLe in 17 out of 26 games.

Conclusion

  • When using a parametric model in a replay-like setting (sampling observed states from the past), model-based learning can be unstable (in theory). Under this state sampling distribution, using a replay buffer is likely the better strategy.

  • Parametric models are likely more useful when:

    • planning backward for credit assignment - even if the model is inaccurate, backward planning will only update fictional states.

    • planning forward for behavior - the resulting plan is only used to collect real experience in the environment (and not to directly update the policy).
diff --git a/_site/site/2020/07/09/Decentralized-Reinforcement-Learning-Global-Decision-Making-via-Local-Economic-Transactions.html b/_site/site/2020/07/09/Decentralized-Reinforcement-Learning-Global-Decision-Making-via-Local-Economic-Transactions.html new file mode 100644 index 00000000..78bb1586 --- /dev/null +++ b/_site/site/2020/07/09/Decentralized-Reinforcement-Learning-Global-Decision-Making-via-Local-Economic-Transactions.html @@ -0,0 +1,134 @@ +

Introduction

  • The paper explores the connections between the concepts of a single agent vs. a society of agents.

  • A society of agents can be modeled as a single agent, while a single agent can be modeled as a society of components (or sub-agents).

  • The paper focuses on mechanisms for training a society of self-interested agents to solve a given task - as if the society were a single agent.

  • Link to the paper

Contributions

  • The societal decision-making framework relates the local optimization problem of a single agent to the global optimization problem of a society of agents.

  • The Cloned Vickrey Society is proposed as a mechanism to guarantee that an agent’s dominant strategy equilibrium coincides with the group’s optimal policy.

  • A class of decentralized RL algorithms that optimize the MDP objective of the society as a whole, as a consequence of individual agents optimizing their own objectives.

  • Empirical evaluation of the Cloned Vickrey Society using an implementation called Credit Conserving Vickrey (CCV).

Terminology

  • Environment - a tuple that specifies an input space, an output space, and parameters for determining an objective.

    • A standard RL setup can be mapped to this notion of environment by mapping the state space to the input space, the action space to the output space, and the reward function, transition function, and discount factor to the parameters specifying the objective.

  • Agent - a function that maps the input space to the output space.

  • Objective - a functional that maps an agent to a real number.

  • In auction environments, the input space is a single auction item (say $s$), and the output space is a bidding space $B$.

  • There are $N$ agents who compete by bidding for an item $s$ using their bidding policies.

  • $b$ is the vector of bids produced by the agents.

  • $v_s$ is the vector of the agents’ valuations of item $s$.

  • The $i^{th}$ agent’s utility is given as $v_s^i \times X^i(b) - P^i(b)$. Here, $X^i(b)$ is the portion of $s$ allocated to the $i^{th}$ agent, and $P^i(b)$ is the price that the $i^{th}$ agent pays (a sketch of the Vickrey case follows the list).
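
A minimal sketch of the Vickrey auction utilities described above (the winner takes the item and pays the second-highest bid); purely illustrative:

```python
import numpy as np

def vickrey_utilities(bids, valuations):
    # bids, valuations: arrays of shape (N,) for N agents
    winner = int(np.argmax(bids))
    second_highest = np.partition(bids, -2)[-2]  # price the winner pays
    utilities = np.zeros_like(valuations, dtype=float)
    utilities[winner] = valuations[winner] - second_highest
    return winner, utilities

# With Vickrey pricing, bidding one's true valuation is a dominant strategy.
winner, u = vickrey_utilities(np.array([0.3, 0.9, 0.5]),
                              np.array([0.4, 1.0, 0.5]))
```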

Design Choices

  • Each agent independently maximizes its own utility.

  • Under certain conditions (i.e., if the auction is dominant strategy incentive compatible), it is optimal for each agent to bid its valuation.

  • These conditions are satisfied by the Vickrey auction, where $P^i(b)$ is set to be the second-highest bid and $X^i(b) = 1$ if the $i^{th}$ agent wins (and 0 otherwise).

  • A society is a set of agents, where each agent is a tuple of a bidding policy $\psi$ and a transformation function.

  • The environment is modeled at two levels - (i) the global environment (referred to as the global MDP) and (ii) the local environment (referred to as the local auction).

  • Each state $s$ in the global MDP is an auction item in a different auction. The winner (of the local auction at $s$) transforms $s$ into some other state $s’$.

  • If these transformations are modeled as actions, then the proposed framework can be interpreted as a decentralized reinforcement learning framework.

  • Motivated by the design of a market economy (where economic transactions determine wealth distribution), the paper proposes that, for an agent, the valuation of winning an auction is the revenue it can receive in the auction at the next timestep by selling the transformed state.

  • A global MDP that adheres to this design is referred to as the Market MDP.

  • There is a catch in the design of the Market MDP - the winner at timestep $t$ receives the highest bid at timestep $t+1$, but the winner at timestep $t+1$ only pays the second-highest bid. Hence, credit is not conserved.

  • This inconsistency can be fixed by introducing “duplicate” (or cloned) agents; the resulting society is called the Cloned Vickrey Society.

  • The Cloned Vickrey Auction mechanism is compared against alternative bidding mechanisms like the first-price auction (where the winner pays the bid they proposed), a solitary version of the Vickrey auction (no cloning), and Environment Reward, where only the environment reward is used and there is no price term.

  • It is empirically shown that the Cloned Vickrey Auction learns bids that are closest to the actual valuations. Moreover, the solitary version leads to bids that are more spread out than the ones learned by the cloned version. This highlights the importance of competitive pressure for learning bid values.

  • Three different implementations of the Cloned Vickrey Auction are considered:

    • Bucket Brigade (BB) - the winner at timestep $t$ receives the highest bid at timestep $t+1$, and the subsequent winner pays the highest bid. This case satisfies Credit Conservation and Bellman Optimality.

    • Vickrey (V) - the winner at timestep $t$ receives the highest bid at timestep $t+1$, and the subsequent winner pays the second-highest bid. This case satisfies Truthful Dominant Strategy and Bellman Optimality.

    • Credit Conserving Vickrey (CCV) - the winner at timestep $t$ receives the second-highest bid at timestep $t+1$, and the subsequent winner pays the second-highest bid. This case satisfies Truthful Dominant Strategy and Credit Conservation.

  • The CCV implementation produces bid values closest to the optimal Q-values.

  • In one experiment, the paper explores the use of the proposed approach for selecting between sub-policies. It shows that CCV is more sample efficient for pretraining sub-policies and adapting them to transfer tasks.

  • In another experiment, the task is to transform MNIST images by composing two out of six affine transformations. The transformed images are fed to a pretrained classifier that predicts a label. The agent gets a reward of 1 if the classifier makes a correct prediction and 0 otherwise. The CCV implementation obtains a mean reward of 0.933, highlighting its effectiveness.
diff --git a/_site/site/2020/07/16/Averaging-Weights-leads-to-Wider-Optima-and-Better-Generalization.html b/_site/site/2020/07/16/Averaging-Weights-leads-to-Wider-Optima-and-Better-Generalization.html new file mode 100644 index 00000000..371b9cc9 --- /dev/null +++ b/_site/site/2020/07/16/Averaging-Weights-leads-to-Wider-Optima-and-Better-Generalization.html @@ -0,0 +1,91 @@ +

Introduction

  • The paper proposes the Stochastic Weight Averaging (SWA) procedure for improving the generalization performance of models trained with SGD (with a cyclic or constant learning rate).

  • Specifically, the model is checkpointed at several points along the training trajectory, and these checkpoints are averaged (in the parameter space) to obtain a single model.

  • Link to the paper

Idea

  • “Stochastic” in the name refers to the idea that, with a cyclical or constant learning rate, SGD proposals are approximately sampled from the neural network’s loss surface and are hence stochastic.

  • SWA uses a learning rate schedule that allows exploration in the weight space.

  • SGD with cyclical and constant learning rates explores points (model instances) at the periphery of the set of high-performing networks.

  • With different initializations, SGD will find different points (of low training loss) on this periphery but will not move inside it.

  • Averaging these points provides a mechanism to move inside the periphery.

  • The train and test error surfaces, while similar, are not perfectly aligned. Hence, averaging several models (along the optimization trajectory) can lead to a more robust model.

Algorithm

  • Given a model $w$ and some training budget $B$, train the model in the conventional way for approximately 75% of the budget.

  • Starting from that point, continue training with the remaining budget, using a constant or cyclical learning rate.

  • For a fixed learning rate, checkpoint the model at each epoch. For a cyclical learning rate, checkpoint the model at the lowest learning rate in the cycle.

  • Average all the checkpoints to get the SWA model (see the sketch after this list).

  • If the model has Batch Normalization layers, run an additional pass over the data to compute the SWA model’s running mean and standard deviation.

  • The computational and space complexity of computing the SWA model is relatively low.

  • The paper highlights the ensembling-like effect of SWA by showing that if the model checkpoints ($w_i$) are generated by training with Fast Geometric Ensembling (FGE), the difference between averaging the weights and averaging the predictions is of the order $O(\Delta^2)$, where $\Delta = \max_i \| w_i - w_{SWA} \|$.

  • Note that, unlike an ensemble, SWA does not incur the overhead of extra forward passes during inference.
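
A minimal sketch of the averaging loop using PyTorch's SWA utilities (`torch.optim.swa_utils`, added to PyTorch after this paper); the model, optimizer, loader, SWA learning rate, and the 75% switch point are placeholders:

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def train_with_swa(model, optimizer, loader, loss_fn, epochs, swa_start):
    swa_model = AveragedModel(model)           # running average of the weights
    swa_sched = SWALR(optimizer, swa_lr=0.05)  # constant LR in the SWA phase
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if epoch >= swa_start:                 # last ~25% of the budget
            swa_model.update_parameters(model)
            swa_sched.step()
    # Recompute BatchNorm running statistics for the averaged weights.
    update_bn(loader, swa_model)
    return swa_model
```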

Experiments

  • Datasets: CIFAR10, CIFAR100, ImageNet.

  • Models: VGG16, WideResNet, 164-layer preactivation ResNet, ShakeShake, Pyramid Net.

  • Baselines: conventional SGD, exponentially decaying average with SGD, and FGE.

  • In all the CIFAR experiments, SWA consistently outperforms SGD within one training budget and consistently improves with further training.

  • SWA also achieves performance comparable to FGE, despite FGE being an ensemble method.

  • On ImageNet, SWA is run on a pre-trained model, and it improves performance in all the cases.

  • An ablation experiment (on CIFAR100) shows that it is possible to train a network (with SWA) using a fixed learning rate. In that setup, using SWA improves performance by 16%.
diff --git a/_site/site/2020/07/23/TASKNORM-Rethinking-Batch-Normalization-for-Meta-Learning.html b/_site/site/2020/07/23/TASKNORM-Rethinking-Batch-Normalization-for-Meta-Learning.html new file mode 100644 index 00000000..4c229152 --- /dev/null +++ b/_site/site/2020/07/23/TASKNORM-Rethinking-Batch-Normalization-for-Meta-Learning.html @@ -0,0 +1,168 @@ +

Introduction

  • Meta-learning techniques have been shown to benefit from the use of deep neural networks.

  • BatchNorm is a commonly used component when training deep networks, especially for vision tasks.

  • However, BatchNorm and meta-learning make contradictory assumptions, and their combination may not work well in practice.

  • The paper proposes TaskNorm, a normalization method designed explicitly for meta-learning.

  • Link to the paper

Setup

  • Standard meta-learning setup with $k$ tasks, each task with its own context and target set.

  • Two sets of parameters are considered during meta-learning - (i) global parameters and (ii) task-specific parameters.

  • The meta-learning setup can be viewed as an inference task, where the task-specific parameters are inferred using a context set and some additional (trainable) parameters.

  • Normalization layers are commonly used to accelerate the training of neural networks. The general approach is to use normalization moments (statistics) along with some learned parameters.

  • BatchNorm is a well-known and widely used normalization approach. It relies on the implicit assumption that the dataset comprises iid samples from some underlying distribution.

  • However, in meta-learning, data points are assumed to be iid only within a specific task.

  • This leaves open the question of what moments to use at meta-train and meta-test time.

Variants of BatchNorm


Conventional BatchNorm (CBN)

  • Compute moments at meta-train time and use them at meta-test time.

  • This is equivalent to lumping the moments with the global parameters, i.e., the running moments are shared globally, while the data is iid only locally.

  • Using CBN with MAML leads to poor results.

  • Moreover, the meta-learning setup can sometimes require the use of a very small batch size (e.g., 1-shot learning). In those cases, the computed statistics are likely to be inaccurate.

Transductive BatchNorm (TBN)

  • Use context/target set statistics at both meta-train and meta-test time.

  • This is the default BatchNorm mode used in MAML.

Instance-based normalization

  • Moments are computed separately for each instance.

  • This mode corresponds to treating the statistics as local at the observation level.

  • These methods provide only limited improvements in performance and can sometimes have a large overhead.

Task Normalization (Proposed)

  • The normalization statistics are local at the task level, and the statistics for a given data point should only depend on the context set’s data points. They should not depend on the other elements of the target set.

  • Meta-Batch Normalization (METABN) is a precursor to TaskNorm, where the context set alone is used to compute the normalization statistics for both the context and the target set (at both meta-train and meta-test time).

  • METABN does not perform well when used with small context sets.

  • TaskNorm overcomes this limitation by using a set of non-transductive secondary moments (computed from the input being normalized).

  • When the context is small, using the additional moments helps improve the moment estimates.

  • In the general case, a trainable blending factor, $\alpha$, is used to combine the two sets of moments (see the sketch after this list).

  • While the computational cost of TaskNorm is slightly more than CBN’s, it converges faster than CBN in practice.

  • The normalization mechanism in Reptile can be interpreted as a particular case of TaskNorm.
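
A minimal sketch of the blended-moment normalization, using instance-level moments as the secondary (non-transductive) statistics; the exact parameterization of `alpha` (the paper ties it to the context-set size) and the moment-pooling details are simplified:

```python
import torch

def tasknorm(x, context, alpha, gamma, beta, eps=1e-5):
    # x: (B, C, H, W) activations to normalize; context: (Bc, C, H, W)
    # context moments, pooled per channel
    mu_c = context.mean(dim=(0, 2, 3), keepdim=True)
    var_c = context.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    # secondary instance-level moments, per example and channel
    mu_i = x.mean(dim=(2, 3), keepdim=True)
    var_i = x.var(dim=(2, 3), unbiased=False, keepdim=True)
    # blend the two sets of moments with the trainable factor alpha
    mu = alpha * mu_c + (1 - alpha) * mu_i
    var = alpha * var_c + (1 - alpha) * var_i
    return gamma * (x - mu) / torch.sqrt(var + eps) + beta
```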

Experiments

  • Small-scale few-shot classification experiments:

    • Omniglot and miniImageNet datasets.

    • First-order MAML, with different kinds of normalization schemes.

    • Transductive BatchNorm performs the best.

    • Among the non-transductive approaches, TaskNorm with the Instance Normalization augmentation performs the best.

    • A similar trend holds for the speed of convergence as well.

  • Large-scale few-shot classification experiments:

    • MetaDataset dataset.

    • CNAPs model.

    • The context set’s size varies across tasks in this setup and can be as small as 5.

    • TaskNorm with Instance Normalization ranks first in 10 (out of 13) datasets and is also the fastest to train.

    • While the instance-based methods (Instance Normalization and Layer Normalization) are the slowest to converge, they still outperform the running-average-based methods (conventional BatchNorm).

    • The results demonstrate that designing meta-learning-specific normalization methods can significantly improve performance, and that Transductive BatchNorm may not always be the optimal choice.
diff --git a/_site/site/2020/07/30/GradNorm-Gradient-Normalization-for-Adaptive-Loss-Balancing-in-Deep-Multitask-Networks.html b/_site/site/2020/07/30/GradNorm-Gradient-Normalization-for-Adaptive-Loss-Balancing-in-Deep-Multitask-Networks.html new file mode 100644 index 00000000..e6ad85c2 --- /dev/null +++ b/_site/site/2020/07/30/GradNorm-Gradient-Normalization-for-Adaptive-Loss-Balancing-in-Deep-Multitask-Networks.html @@ -0,0 +1,99 @@ +

Introduction

  • The paper proposes GradNorm, a gradient normalization algorithm that improves multi-task training by dynamically tuning the magnitudes of the gradients corresponding to different tasks.

  • Link to the paper

Motivation

  • During multi-task training, some tasks can dominate the training at the expense of others.

  • It is common to define the multi-task loss as a linearly weighted combination of the individual task losses.

  • The paper proposes two changes to this setup:

    • Adapt the weight coefficients, assigned to each loss term, at each training step.

    • Directly modify the gradient magnitudes, corresponding to different tasks, so that all the tasks learn at similar rates.

  • The proposed GradNorm algorithm is similar to BatchNorm, but it performs normalization across tasks, not data batches.

Algorithm

  • The target gradient norm at timestep $t$, for the $i^{th}$ task, is computed as the product of the average gradient norm (across all tasks at timestep $t$) and $r_i(t)^{\alpha}$.

  • $r_i$ is the relative inverse training rate of task $i$. It is defined as the ratio between the loss ratio of task $i$ and the average loss ratio (across all the tasks).

  • $\alpha$ is a hyperparameter.

  • This computed per-task gradient norm is treated as the target value for the actual gradient norms.

  • An additional $L_1$ loss between the actual and the target gradient norms, summed over all the tasks, is incorporated; it optimizes the weight coefficients only (see the sketch after this list).

  • After every step, the weight coefficients are renormalized to decouple the gradient normalization from the global learning rate.

  • Note that all the gradient norm computations are performed only for the layers on which GradNorm is applied. Generally, GradNorm is used with only the last shared layer of weights (to save on computational costs).
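
A minimal sketch of one GradNorm step, assuming per-task losses from a shared trunk; `W` is the last shared layer's weight tensor, `weights` are the learnable loss coefficients, and `initial_losses` caches each task's loss at step 0:

```python
import torch

def gradnorm_loss(losses, initial_losses, weights, W, alpha=1.5):
    # Per-task gradient norms of the weighted losses w.r.t. the shared layer.
    norms = torch.stack([
        torch.norm(torch.autograd.grad(w * L, W,
                                       retain_graph=True, create_graph=True)[0])
        for w, L in zip(weights, losses)
    ])
    with torch.no_grad():
        # relative inverse training rates r_i, treated as constants
        loss_ratios = torch.stack([L / L0 for L, L0 in zip(losses, initial_losses)])
        r = loss_ratios / loss_ratios.mean()
        target = norms.mean() * r ** alpha  # per-task target norms
    # L1 loss between actual and target norms; backprop into `weights` only.
    return torch.abs(norms - target).sum()
```

After backpropagating this loss into `weights`, the coefficients are renormalized (e.g., to sum to the number of tasks) so the gradient normalization stays decoupled from the global learning rate.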

Experiments

  • Two variants of the NYUv2 dataset - NYUv2+seg (small dataset) and NYUv2+kpts (big dataset).

  • Both regression and classification setups are used.

  • Models:

    • SegNet with a symmetric VGG16 encoder/decoder.

    • FCN with a modified ResNet-50 as the encoder and a shallow ResNet as the decoder.

  • Standard pixel-wise losses for each task.

Results

  • GradNorm with $\alpha=1.5$ outperforms the equal-weight baseline and either surpasses or matches the best performance of single networks for each task.

  • Almost any value of $0 < \alpha < 3$ improves the network’s performance over the equal-weight baseline.
diff --git a/_site/site/2020/08/06/Gradient-Surgery-for-Multi-Task-Learning.html b/_site/site/2020/08/06/Gradient-Surgery-for-Multi-Task-Learning.html new file mode 100644 index 00000000..cb090973 --- /dev/null +++ b/_site/site/2020/08/06/Gradient-Surgery-for-Multi-Task-Learning.html @@ -0,0 +1,113 @@ +
  • The paper hypothesizes that the main optimization challenges in multi-task learning arise because of negative interference between different tasks’ gradients.

  • It hypothesizes that negative interference happens when:

    • The gradients are conflicting (i.e., have a negative cosine similarity).

    • The gradients coincide with high positive curvature.

    • The difference in gradient magnitudes is quite large.

  • The paper proposes to work around this problem by performing “gradient surgery.”

  • If two gradients are conflicting, modify the gradients by projecting each onto the normal plane of the other.

  • This modification is equivalent to removing the conflicting component of the gradient.

  • This approach is referred to as projecting conflicting gradients (PCGrad). A sketch is included after this list.

  • Link to the paper

  • Theoretical Analysis:

    • The paper proves the local conditions under which PCGrad improves multi-task gradient descent in the two-task setup.

    • The conditions are:

      • The angle between the task gradients is not too small.

      • The difference in the magnitude of the gradients is sufficiently large.

      • The curvature of the multi-task gradient is large.

      • A large enough learning rate.

  • Experimental Setup:

    • Multi-task supervised learning:

      • MultiMNIST, Multi-task CIFAR100, NYUv2.

      • For Multi-task CIFAR100, PCGrad is used with the shared parameters of the routing networks.

      • For NYUv2, PCGrad is combined with MTAN.

      • In all the cases, using PCGrad improves the performance.

    • Multi-task Reinforcement Learning:

      • Meta-World benchmark.

      • PCGrad + SAC outperforms all other baselines.

      • In the context of SAC, the paper suggests learning the temperature $\alpha$ on a per-task basis.

    • Goal-conditioned Reinforcement Learning:

      • Goal-conditioned robotic pushing task with a Sawyer robot.

      • PCGrad + SAC outperforms vanilla SAC.
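
A minimal sketch of the PCGrad projection on flattened per-task gradients; the paper additionally randomizes the order in which the other tasks are visited, which is omitted here:

```python
import torch

def pcgrad(task_grads):
    # task_grads: list of flattened per-task gradient vectors
    projected = [g.clone() for g in task_grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(task_grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # conflicting: negative cosine similarity
                # project g_i onto the normal plane of g_j,
                # removing the conflicting component
                g_i.sub_(dot / g_j.norm() ** 2 * g_j)
    return torch.stack(projected).sum(dim=0)  # combined update direction
```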
diff --git a/_site/site/2020/08/14/Outrageously-Large-Neural-Networks-The-Sparsely-Gated-Mixture-of-Experts-Layer.html b/_site/site/2020/08/14/Outrageously-Large-Neural-Networks-The-Sparsely-Gated-Mixture-of-Experts-Layer.html new file mode 100644 index 00000000..447137c2 --- /dev/null +++ b/_site/site/2020/08/14/Outrageously-Large-Neural-Networks-The-Sparsely-Gated-Mixture-of-Experts-Layer.html @@ -0,0 +1,143 @@ +

Introduction

  • Conditional computation is a technique to increase a model’s capacity (without a proportional increase in computation) by activating parts of the network on a per-example basis.

  • The paper describes (and addresses) the computational and algorithmic challenges in conditional computation. It introduces a sparsely-gated Mixture-of-Experts layer (MoE) with thousands of feed-forward sub-networks.

  • Link to the paper

Practical Challenges

  • GPUs are fast at matrix arithmetic but slow at branching.

  • Large batch sizes amortize the cost of parameter updates, but conditional computation reduces the effective batch size for different components of the model.

  • Network bandwidth can be a bottleneck, with the network demand overshadowing the computational demand.

  • Additional losses may be needed to achieve the desired level of sparsity.

  • Conditional computation is most useful for large datasets.

Architecture

  • $n$ expert networks - $E_1$, …, $E_n$.

  • A gating network $G$ to select a sparse combination of experts.

  • The output of the MoE module is the weighted sum of the experts’ predictions (weighted by the output of the gate).

  • If the gating network’s output is sparse, then some of the experts’ outputs do not have to be computed.

  • In theory, one could use a hierarchical mixture of experts, where a mixture of experts is trained at each level.

Choices for the Gating Network

  • Softmax gating.

  • Noisy top-k gating - add tunable Gaussian noise to the output of softmax gating and retain only the top-k values (see the sketch below). A second trainable weight matrix controls the amount of noise per component.
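
A minimal sketch of noisy top-k gating; `W_g` and `W_noise` correspond to the two trainable matrices mentioned above, and the shapes and k value are illustrative:

```python
import torch
import torch.nn.functional as F

def noisy_topk_gate(x, W_g, W_noise, k=4):
    # x: (B, D) inputs; W_g, W_noise: (D, n_experts) trainable matrices
    clean = x @ W_g
    noise_std = F.softplus(x @ W_noise)           # per-component noise scale
    logits = clean + torch.randn_like(clean) * noise_std
    topk, idx = logits.topk(k, dim=-1)            # keep only the top-k experts
    masked = torch.full_like(logits, float('-inf')).scatter(-1, idx, topk)
    return F.softmax(masked, dim=-1)              # sparse mixture weights
```

Because the softmax is taken after masking, all but k gate values are exactly zero, so only k experts need to be evaluated per example.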

Addressing Performance Challenge

  • Shrinking Batch Problem:

    • If the MoE selects k out of n experts, the effective batch size reduces by a factor of k / n.

    • This reduction in batch size is accounted for by combining data parallelism (for the standard layers and the gating network) and model parallelism (for the experts in the MoE). Thus, with d devices, the batch size changes by a factor of (k x d) / n.

    • For hierarchical MoE, the primary gating network uses data parallelism, while the secondary MoEs use model parallelism.

    • The paper considers LSTM models where the MoE is applied once the previous layer has finished. This increases the batch size (for the current MoE layer) by a factor equal to the number of unrolled timesteps.

    • Network bandwidth limitations can be overcome by ensuring that the ratio of an expert’s computation to its input and output size is greater than (or equal to) the ratio of computational capacity to network capacity.

    • Computational efficiency can be improved by using larger hidden layers (or more hidden layers).

  • Balancing Expert Utilization:

    • The importance of an expert (relative to a batch of training examples) is defined as the batchwise sum of the expert’s gate values.

    • An additional loss, called the importance loss, is added to encourage the experts to have equal importance.

    • The importance loss is defined as the square of the coefficient of variation (of the set of importance values) multiplied by a (hand-tuned) scaling factor $w_{importance}$.

    • In practice, an additional loss, called $L_{load}$, might be needed to ensure that the different experts get equal load (along with equal importance).

Experiments

  • Datasets:

    • Billion Word language modeling benchmark.

    • 100 billion word Google News corpus.

    • Machine translation datasets:

      • Single language pairs - WMT’14 En to Fr (36M sentence pairs) and En to De (5M sentence pairs).

      • Multilingual machine translation - a large combined dataset of twelve language pairs.

  • In all the setups, the proposed MoE models achieve significantly better results than the baseline models, at a lower computational cost.
diff --git a/_site/site/2020/08/24/Alpha-Net-Adaptation-with-Composition-in-Classifier-Space.html b/_site/site/2020/08/24/Alpha-Net-Adaptation-with-Composition-in-Classifier-Space.html new file mode 100644 index 00000000..b8e6ec7f --- /dev/null +++ b/_site/site/2020/08/24/Alpha-Net-Adaptation-with-Composition-in-Classifier-Space.html @@ -0,0 +1,88 @@ +

Introduction

  • Common transfer learning methods focus on transferring knowledge in the model’s feature space.

  • In contrast, the paper argues that the learned knowledge is more concisely captured in the “classifier space,” as the classifier is fitted to all the samples of a given class, while the feature representation is specific to each sample.

  • Building on this intuition, the paper proposes to combine strong classifiers (trained on large datasets) with weak classifiers (trained on smaller datasets) to improve the weak classifiers’ performance.

  • Link to the paper

High-Level Idea

  • Given $n$ classifiers, $C_1, …, C_n$, trained with a large amount of data, and a weak classifier $a$ trained for a class with few samples:

  • Find the nearest neighbors of $a$.

  • Train a new classifier by linearly combining $a$ with its nearest classifiers.

  • The coefficients (for linearly combining the classifiers) are learned using another network called AlphaNet.

  • In theory, this approach can be used with any set of classifiers.

Setup

  • A long-tailed dataset is one where some classes (referred to as the tail classes) have very few examples - for example, ImageNet-LT and Places-LT.

  • Split the long-tailed dataset into two splits - “base” classes with $B$ (number of) classes and “few” classes with $F$ (number of) classes.

  • The total number of classes is $N = B + F$.

  • Start with a pre-trained model, with classifiers $w_j$ and biases $b_j$ for $j \in (1, N)$.

  • For a given target class $j$, find its top $k$ nearest-neighbor classifiers and concatenate their outputs.

  • For each “few” class, learn a feedforward network that takes the concatenated representation (of classifiers) as the input and returns a vector of $k$ $\alpha$ values.

  • These $\alpha$ values are interpreted as the classifier’s strength (or confidence) in its nearest neighbors.

  • The (normalized) $\alpha$ values are used for defining the weight and bias of the classifier for the given “few” class (see the sketch after this list).

  • The collection of all the “few” classifiers is referred to as AlphaNet.

  • The paper outlines a degenerate case, where the confidence in the predictions of all the strong classifiers goes to 0. The paper proposes to counter this case by clamping the $\alpha$ values.

  • The entire setup is trained end-to-end using a cross-entropy loss on AlphaNet.
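
A minimal sketch of composing one "few"-class classifier from its nearest strong classifiers; the alpha-producing network, its input representation, and the normalization/clamping details are simplified relative to the paper:

```python
import torch
import torch.nn as nn

class AlphaClassifier(nn.Module):
    def __init__(self, neighbor_ws, neighbor_bs, w_weak, b_weak):
        super().__init__()
        # neighbor_ws: (k, D) weights of the k nearest strong classifiers
        self.register_buffer("nw", neighbor_ws)
        self.register_buffer("nb", neighbor_bs)
        self.register_buffer("w_weak", w_weak)
        self.register_buffer("b_weak", b_weak)
        k, D = neighbor_ws.shape
        # small network producing the k alpha coefficients (placeholder sizes)
        self.alpha_net = nn.Sequential(nn.Linear(k * D, 64), nn.ReLU(),
                                       nn.Linear(64, k))

    def forward(self, x):
        alphas = self.alpha_net(self.nw.flatten()[None]).squeeze(0)
        alphas = alphas.clamp(min=0.0)            # guard the degenerate case
        alphas = alphas / (alphas.sum() + 1e-8)   # normalize
        # compose the "few" classifier from the weak and strong classifiers
        w = self.w_weak + (alphas[:, None] * self.nw).sum(0)
        b = self.b_weak + (alphas * self.nb).sum()
        return x @ w + b                          # per-class logit for inputs x
```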

Results

  • Given the proposed approach’s flexibility, it is used to combine the state-of-the-art models on ImageNet-LT, namely classifiers retrained on class-balanced samples and models trained with weight normalization. The combined setup outperforms the individual models.

  • One interesting observation is that it is useful to include the weak classifiers, along with the strong classifiers, as AlphaNet adjusts the position of the weak classifiers towards the appropriate strong classifiers.

  • While the idea is described in the context of long-tail data distributions, it is useful in the general context of non-stationary data distributions. One instantiation could be lifelong class-incremental learning, where the model encounters new data classes during training. For some time (till sufficient data points are seen), the newly seen classes are the “few” classes. This approach can help with faster adaptation when the model is yet to see sufficient examples of the unseen classes.
diff --git a/_site/site/2020/08/31/Deep-Reinforcement-Learning-and-the-Deadly-Triad.html b/_site/site/2020/08/31/Deep-Reinforcement-Learning-and-the-Deadly-Triad.html new file mode 100644 index 00000000..dde4f919 --- /dev/null +++ b/_site/site/2020/08/31/Deep-Reinforcement-Learning-and-the-Deadly-Triad.html @@ -0,0 +1,139 @@ +

Introduction

  • The paper investigates the practical impact of the deadly triad (function approximation, bootstrapping, and off-policy learning) in deep Q-networks (trained with experience replay).

  • The deadly triad is called so because when all three components are combined, TD learning can diverge, and value estimates can become unbounded.

  • However, in practice, the components of the deadly triad have been combined successfully. An example is training DQN agents to play Atari.

  • Link to the paper

Setup

  • The effect of each component of the triad can be regulated with some design choices:

    • Bootstrapping - by controlling the number of steps before bootstrapping.

    • Function approximation - by controlling the size of the neural network.

    • Off-policy learning - by controlling how data points are sampled from the replay buffer (i.e., using different prioritization approaches).

  • The problem is studied in two contexts: a toy example and Atari 2600 games.

  • The paper makes several hypotheses about how the different components may interact in the triad and evaluates these hypotheses by training DQN with different hyperparameters:

    • Number of steps before bootstrapping - 1, 3, 10.

    • Four levels of prioritization (for sampling data from the replay buffer).

    • Bootstrap target - Q-learning, target Q-learning, inverse double Q-learning, and double Q-learning.

    • Network sizes - small, medium, large, and extra-large.

  • Each experiment was run with three different seeds.

  • The paper formulates a series of hypotheses and designs experiments to support/reject them.

Hypothesis 1: Combining Q learning with conventional deep RL function spaces does not commonly lead to divergence

  • Rewards are clipped between -1 and 1, and the discount factor is set to 0.99. Hence, the maximum absolute action value is bounded to be smaller than 100. This upper bound is used to detect soft-divergence in the value estimates.

  • The paper reports that while soft-divergence does occur, the values do not become unbounded, thus supporting the hypothesis.

Hypothesis 2: There is less divergence when correcting for overestimation bias or when bootstrapping on separate networks.

  • One manifestation of bootstrapping on separate networks is target Q-learning. While using separate networks helps on Atari, it does not entirely solve the problem in the toy setup.

  • One manifestation of correcting for the overestimation bias is double Q-learning.

  • In its standard form, double Q-learning benefits from bootstrapping on a separate network. To isolate the gains from each component independently, an inverse double Q-learning update is used that does not use a separate target network for bootstrapping.

  • Experimentally, Q-learning is the most unstable, while target Q-learning and double Q-learning are the most stable. This observation supports the hypothesis.

Hypothesis 3: Longer multi-step returns will diverge less easily

  • This hypothesis is intuitive, as the dependence on bootstrapping is reduced with multi-step returns.

  • Experimental results support this hypothesis.

Hypothesis 4: Larger, more capacity networks will diverge less easily.

  • This hypothesis is based on the assumption that more flexible value function approximators may behave more like the tabular case.

  • In practice, smaller networks show fewer instances of instability than larger networks.

  • The hypothesis is not supported by the experiments.

Hypothesis 5: Stronger prioritization of updates will diverge more easily.

  • This hypothesis is supported by the experiments for all four update rules.

Effect of the deadly triad on the agent’s performance

  • Generally, soft-divergence correlates with poor control performance.

  • For example, longer multi-step returns lead to fewer instances of instability and better performance.

  • The trend is more interesting in terms of network capacity. Larger networks tend to diverge more but also perform the best.

  • While action-value estimates can grow to large values, they can recover to plausible values as training progresses.
diff --git a/_site/site/2020/09/07/Revisiting-Fundamentals-of-Experience-Replay.html b/_site/site/2020/09/07/Revisiting-Fundamentals-of-Experience-Replay.html new file mode 100644 index 00000000..93564e53 --- /dev/null +++ b/_site/site/2020/09/07/Revisiting-Fundamentals-of-Experience-Replay.html @@ -0,0 +1,130 @@ +

Introduction

  • The paper presents an extensive study of the effects of experience replay in Q-learning based methods.

  • It focuses explicitly on the replay capacity and the replay ratio (the ratio of learning updates to experience collected).

  • Link to the paper

Setup

  • Replay capacity is defined as the total number of transitions stored in the replay buffer.
  • The age of a transition (stored in the replay buffer) is defined as the number of gradient steps taken by the agent since the transition was stored.
  • The larger the replay capacity, the older the oldest transition in the buffer (also referred to as the age of the oldest policy).
  • The larger the replay capacity, the greater the degree of “off-policyness” of the transitions in the buffer (with everything else held constant).
  • The replay ratio is the number of gradient updates per environment transition. This ratio can be used as a proxy for how often the agent uses old data (vs. collecting new data) and is related to off-policyness. (See the sketch below for how these quantities relate.)
  • In the DQN paper, the replay ratio is set to 0.25.
  • For the experiments, a subset of 14 games is selected from the Atari ALE (Arcade Learning Environment) benchmark, with sticky actions enabled.
  • Each experiment is repeated with three seeds.
  • Rainbow is used as the base algorithm.
  • The total number of gradient updates and the batch size (per gradient update) are fixed for all the experiments.
  • Rainbow used a replay capacity of 1M and an oldest policy of age 250K.
  • In the experiments, the replay capacity varies from 0.1M to 10M (5 values), and the age of the oldest policy varies from 25K to 25M (4 values).
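
To make the bookkeeping concrete, here is a minimal sketch (function names are mine) relating the quantities defined above:

```python
def oldest_policy_age(replay_capacity, replay_ratio):
    """Age (in gradient steps) of the oldest transition once the buffer is full.

    A transition survives for `replay_capacity` environment steps, and the
    agent takes `replay_ratio` gradient steps per environment step.
    """
    return replay_capacity * replay_ratio

# Rainbow's defaults: a 1M-capacity buffer at a replay ratio of 0.25
# gives the oldest policy an age of 250K, as noted above.
assert oldest_policy_age(1_000_000, 0.25) == 250_000
```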

Observations

  • With the age of the oldest policy fixed, performance improves with higher replay capacity, probably due to increased state-action coverage.
  • With fixed replay capacity, reducing the oldest policy’s age improves performance, probably due to the reduced off-policyness of the data in the replay buffer.
  • However, in some specific instances (sparse-reward, hard-exploration setups), performance can drop when reducing the oldest policy’s age.
  • Increasing replay capacity while keeping the replay ratio fixed provides varying improvements, depending on the particular values of replay capacity and replay ratio.
  • The paper reports the effect of these choices for DQN as well.
  • Unlike Rainbow, DQN does not improve with larger replay capacity, irrespective of whether the replay ratio or the age of the oldest policy is kept fixed.
  • Given that the Rainbow agent is a DQN agent with additional components, the paper explores which of these components leads to an improvement in Rainbow’s performance as replay capacity increases.

Additive Experiments

  • Four new DQN variants are created by adding each of Rainbow’s four components to the base DQN agent.
  • DQN with n-step returns is the only variant that benefits from increased replay capacity.
  • The usefulness of n-step returns is further validated by verifying that a Rainbow agent without n-step returns does not benefit from increased replay capacity, while a Rainbow agent with any other single component removed still benefits from the increased capacity.
  • Prioritized Experience Replay does not significantly affect the performance with increased replay capacity.
  • The observation that n-step returns are critical for taking advantage of larger replay sizes is surprising because uncorrected n-step returns are theoretically not suitable for off-policy learning.
  • The paper tests the limits of increasing replay capacity (with n-step returns) by performing experiments in the offline-RL setup: an agent collects a dataset of about 200M frames, which is then used to train another agent.
  • Even in this extreme setup, n-step returns improve the learning agent’s performance.

Why do n-step returns help?

  • Hypothesis 1: n-step returns help to counter the increased off-policyness produced by a larger replay buffer.
    • This hypothesis does not seem to hold, as keeping the oldest policy fixed or using the same contraction factor as an n-step update does not improve the 1-step update’s performance.
  • Hypothesis 2: Increasing the replay buffer’s capacity may reduce the variance of the n-step returns.
    • This hypothesis is evaluated by training on environments with lower variance or by turning off sticky actions in the Atari domain.
    • While the hypothesis explains the gains from n-step returns to some extent, n-step gains are observed even in environments with low variance.
diff --git a/_site/site/2020/09/14/MONet-Unsupervised-Scene-Decomposition-and-Representation.html b/_site/site/2020/09/14/MONet-Unsupervised-Scene-Decomposition-and-Representation.html new file mode 100644 index 00000000..311ec262 --- /dev/null +++ b/_site/site/2020/09/14/MONet-Unsupervised-Scene-Decomposition-and-Representation.html @@ -0,0 +1,91 @@ +

Introduction

  • The paper introduces the Multi-Object Network (MONet) architecture, which learns a modular representation of images by spatially decomposing scenes into objects and learning a representation for each object.
  • Link to the paper

Architecture

  • Two components:
    • Attention Module: generates spatial masks corresponding to the objects in the scene.
    • VAE: learns a representation for each object.
  • VAE components:
    • Encoder: takes as input the image and the attention mask generated by the attention module and produces the parameters for a distribution over the latent variable z.
    • Decoder: takes as input the latent variable z and attempts to reproduce the image.
  • The decoder loss term is weighted by the mask, i.e., the decoder tries to reproduce only those parts of the image that the attention mask focuses on.
  • The attention mechanism is auto-regressive, with an ongoing state (called a scope) that tracks which parts of the image have not yet been attended over (see the sketch below).
  • In the last step, no attention mask is computed, and the previous scope is used as-is. This ensures that all the masks sum to 1.
  • The VAE also models the attention mask over the components, i.e., the probability that a pixel belongs to a particular component.
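
A minimal sketch of the recursive scope update described above (my own pseudocode; `attention_step` stands in for the attention network):

```python
import numpy as np

def decompose(image, attention_step, num_slots):
    """Recursively split an image into attention masks that sum to 1.

    attention_step(image, scope) -> per-pixel alpha in [0, 1]
    (an assumed helper standing in for the attention network).
    """
    scope = np.ones(image.shape[:2])      # initially, nothing is explained
    masks = []
    for _ in range(num_slots - 1):
        alpha = attention_step(image, scope)
        masks.append(scope * alpha)       # region claimed by this slot
        scope = scope * (1.0 - alpha)     # region left for later slots
    masks.append(scope)                   # last slot takes the remaining scope
    return masks                          # per-pixel masks sum to 1
```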

Motivation

  • A model could efficiently process compositional visual scenes if it can exploit recurring structure in the scene.
  • The paper validates this hypothesis by showing that an autoencoder performs better if it can build up the scene compositionally, processing one mask at a time (using ground-truth spatial masks), rather than processing the whole scene at once.

Results

  • The VAE encoder parameterizes a diagonal Gaussian latent posterior, with a spatial broadcast decoder that encourages the VAE to learn disentangled features.
  • MONet with seven slots is trained on the Objects Room dataset with 1-3 objects.
    • It learns to generate different attention masks for different objects.
    • Combining the reconstructed components using the corresponding attention masks produces a good-quality reconstruction of the entire scene.
    • Since it is an autoregressive model, MONet can be evaluated with more slots. The model generalizes to novel scene configurations (not seen during training).
  • On the Multi-dSprites dataset (a modification of the dSprites dataset), the model (post-training) distinguishes individual sprites and the background.
  • On the CLEVR dataset (2-10 objects per image), the model generates good image segmentations and reconstructions and can distinguish between overlapping shapes.
diff --git a/_site/site/2020/09/21/Harvest,-Yield,-and-Scalable-Tolerant-Systems.html b/_site/site/2020/09/21/Harvest,-Yield,-and-Scalable-Tolerant-Systems.html new file mode 100644 index 00000000..9d5131dc --- /dev/null +++ b/_site/site/2020/09/21/Harvest,-Yield,-and-Scalable-Tolerant-Systems.html @@ -0,0 +1,82 @@ +

Introduction

  • A classic paper that looks into strategies for scaling large systems so that they degrade gracefully under failure.
  • Link to the paper

CAP Theorem

  • CAP refers to strong Consistency, high Availability, and Partitionability.
  • Strong consistency refers to single-copy ACID consistency.
  • High availability means any consumer can access the data at any time. Generally, this is achieved by adding one or more data replicas.
  • Partitionability means that the system can survive a partition between the different replicas.
  • The strong CAP theorem states that any system can have at most two of the three properties.
  • The weak CAP theorem says that the stronger the guarantees about any two of the properties, the weaker the guarantees about the third.

Harvest, Yield, and CAP Theorem

  • Assume that clients are making requests to a server.
  • There are two quantities of interest here (see the toy example below):
    • Yield - the probability of completing a request.
    • Harvest - the completeness of the answer to a query.
  • In the presence of faults, a tradeoff can be made between yield and harvest. This tradeoff applies to both read and update queries.
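
A toy illustration of the two quantities (my own formulation of the definitions above):

```python
def yield_fraction(completed_requests, total_requests):
    """Yield: fraction of requests that complete successfully."""
    return completed_requests / total_requests

def harvest_fraction(data_reflected, total_data):
    """Harvest: fraction of the complete data reflected in a response."""
    return data_reflected / total_data

# If 1 of 100 (unreplicated) nodes is down, a search service can still answer
# every query from the remaining nodes: yield stays 1.0, harvest drops to 0.99.
print(yield_fraction(100, 100), harvest_fraction(99, 100))
```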

Two strategies for scaling systems


Trading Harvest for Yield

  • In a hundred-node cluster (without replication), a single-node failure reduces harvest by 1%, and in the case of multi-node failures, the harvest degrades linearly.
  • The probability of losing high-priority data can be reduced by replicating it. However, replicating all the data would not guarantee 100% harvest and yield despite the significant costs.

Application Decomposition and Orthogonal Mechanisms

  • Decompose a large application into subcomponents so that each component can be provisioned separately. Strong consistency can then be applied only to the components that need it, instead of to the application as a whole.
  • Further, failure of one or more components need not cause the application to fail as a whole.
  • Decomposition also provides the opportunity to use orthogonal mechanisms, i.e., mechanisms that are independent of other mechanisms, with no runtime interface.
  • Composing orthogonal subsystems improves the robustness of runtime interactions by containing errors locally. For example, orthogonal components can be restarted/replaced independently without affecting other running components.
diff --git a/_site/site/2020/09/28/A-Foliated-View-of-Transfer-Learning.html b/_site/site/2020/09/28/A-Foliated-View-of-Transfer-Learning.html new file mode 100644 index 00000000..04468877 --- /dev/null +++ b/_site/site/2020/09/28/A-Foliated-View-of-Transfer-Learning.html @@ -0,0 +1,72 @@ +

Introduction

  • The paper presents a formalism for transfer learning, offers a definition of relatedness between tasks, and proposes foliations as a mathematical framework to represent the relationship between tasks.
  • Link to the paper

Summary

  • The term representation denotes a mechanism for describing and realizing abstract objects, thus allowing manipulation and reasoning about the objects. This description goes beyond the usual meaning (in deep learning), where representation denotes some useful information about data.
  • Relatedness describes what changes between tasks. Consider a set of transformations (or functions) that convert one task to another. A relationship between two tasks is an element of this transformation set.
  • Given a transformation set, one can define a set of related tasks, which is the set of all the tasks that can be transformed into each other using the functions from the given transformation set. This set of tasks is an equivalence class, and the transformation set is the equivalence relationship.
  • Given two related tasks t1 and t2, denote the corresponding models (trained on those tasks) as m1 and m2. One can assume that m1 and m2 are related in the same way as t1 and t2 (equivariance).
  • Now, given a set of transformations, one can partition the space of continuous functions into non-overlapping spaces, each of which describes a set of related tasks. These spaces are referred to as parallel spaces or transfer spaces.
  • A parallel space has a lower dimension than the original space, so knowing which parallel space a model lies on can make it easier to find the model. This is the primary motivation behind transfer learning - knowing the relationship between tasks can make it easier to find a solution to a new task.
  • Another way of partitioning the space of tasks is to use tessellation (e.g., Voronoi diagrams). Tasks in the same partition are more similar to each other than to a task from another partition.
  • Two tasks are defined as similar if the distance between them (under some distance metric) is small.
  • Similarity is a geometric notion, while relatedness is a transformative notion. Parallelized space is to relatedness what tessellation is to similarity.
  • The distinction between similarity and relatedness is quite nuanced, and the authors provide several examples to differentiate between them.
  • Similarity can only be measured in terms of a reference element (similar to what). For example, when one finetunes a pre-trained model on a new task, one assumes that the model’s pretraining task is similar to the current task.
  • Given a set (say T), a quantity q (a function that maps elements of T to a k-dimensional vector) is said to be invariant with respect to a transformation p (defined on T) if q(f) = q(p(f)), i.e., the value of q at f (belonging to T) does not change when f is transformed by p.
  • If one assumes that the set of transformations is a group, specifically a Lie group whose action on the set of tasks is locally free and regular, then one can define a parallel partitioning of the space of tasks and the space of models.
  • One can develop a hierarchical categorization scheme for the set of all considered tasks using the invariant quantities.
  • One can consider the spaces of tasks and models to be smooth manifolds, as manifolds naturally give a notion of representation and of transformations between them.
  • A manifold is a topological space that can be locally mapped to a Euclidean space using coordinate charts. One can define a regular foliation by choosing charts that satisfy certain conditions. In that case, the manifold has immersed, connected, non-intersecting submanifolds called leaves.
  • The charts (that satisfy those conditions) give a set of rectified coordinates, where the notions of “which leaf a point is on” and “where on the leaf it is” are clearly separated.
  • Thus, foliations can provide the theoretical tools to work with parallel spaces.
  • How foliations can be incorporated into the theory and solutions for transfer learning is left as future work.
diff --git a/_site/site/2020/10/12/Remembering-for-the-Right-Reasons-Explanations-Reduce-Catastrophic-Forgetting.html b/_site/site/2020/10/12/Remembering-for-the-Right-Reasons-Explanations-Reduce-Catastrophic-Forgetting.html new file mode 100644 index 00000000..77e4a073 --- /dev/null +++ b/_site/site/2020/10/12/Remembering-for-the-Right-Reasons-Explanations-Reduce-Catastrophic-Forgetting.html @@ -0,0 +1,63 @@ +

Introduction

  • The paper hypothesizes that catastrophic forgetting can happen if the model cannot rely on the “reasoning” it used for an old datapoint. If that is the case, catastrophic forgetting may be alleviated when the model “remembers” why it made a prediction previously.
  • The paper presents a simple instantiation of this hypothesis in the form of a technique called Remembering for the Right Reasons (RRR).
  • The idea is to store model explanations along with previous examples in the replay buffer. During replay, an additional explanation loss is used along with the regular replay loss.
  • Link to the paper
  • Link to the code

Setup

  • The model is trained over a sequence of data distributions in the class-incremental learning setup. A single-head architecture is used so that the task ID is not required during inference.
  • Along with the standard replay buffer (\(M^{rep}\)) for the raw input examples (from different tasks), another replay buffer (\(M^{RRR}\)) is maintained for storing the “explanations” (in the form of saliency maps) corresponding to the examples in \(M^{rep}\).
  • RRR is implemented as an L1 loss on the error between the saliency map generated after training on the current task and the saliency map stored in \(M^{RRR}\) (see the sketch below).
  • Saliency maps need to be generated while the model is training. This requirement rules out black-box saliency methods, which can be used only after training.
  • The gradient-based white-box explainability techniques that are used include:
    • Vanilla backpropagation - Perform a forward pass through the model and take the gradient of the given output class with respect to the input.
    • Backpropagation with SmoothGrad - Saliency maps generated using vanilla backpropagation can be visually noisy. These maps can be improved by adding pixel-wise Gaussian noise to n copies of the image and averaging the resulting gradients. The paper used n=40.
    • Gradient-weighted Class Activation Mapping (Grad-CAM) - Uses gradients to determine the importance of feature map activations for a given prediction.
  • RRR can be easily combined with memory- and regularization-based approaches.
  • The paper combined RRR with several standard Class Incremental Learning (CIL) models.
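
A rough sketch of the RRR objective (my own simplification, assuming a PyTorch-style classifier and the vanilla-backpropagation saliency listed above):

```python
import torch
import torch.nn.functional as F

def rrr_replay_loss(model, replay_x, replay_y, stored_saliency):
    """Replay loss plus an L1 penalty between current and stored saliency maps."""
    replay_x = replay_x.clone().requires_grad_(True)
    logits = model(replay_x)
    task_loss = F.cross_entropy(logits, replay_y)
    # Saliency via vanilla backpropagation: gradient of the target class
    # score with respect to the input (create_graph=True so the saliency
    # mismatch itself can be backpropagated through).
    score = logits.gather(1, replay_y.unsqueeze(1)).sum()
    saliency = torch.autograd.grad(score, replay_x, create_graph=True)[0]
    return task_loss + F.l1_loss(saliency, stored_saliency)
```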

Experiments

Few-Shot Class Incremental Learning

  • C-way K-shot class-incremental learning, with C classes and K training samples per class in each new task, and b base classes to learn as the first task.
  • Caltech-UCSD Birds dataset with 100 base classes; the remaining 100 classes are divided into ten tasks, with three samples per class. The test set is not changed.
  • In terms of saliency maps, Grad-CAM is better than vanilla backpropagation, which in turn is comparable to SmoothGrad. The same trend is seen in terms of memory overhead, with Grad-CAM having the least memory overhead.
  • Adding the RRR loss improves the performance of all the baselines.

Standard Class Incremental Learning

  • CIFAR100 and ImageNet100 with a memory budget of 2000 samples.
  • Adding the RRR loss improves all the baselines’ performance, and the gains for ImageNet100 are more significant than the gains for CIFAR100.

How often does the model remember its decision for the right reason?

  • The paper uses the Pointing Game (PG) experiment, which uses the ground-truth image segmentation to define the true object region.
  • If the maximum-attention location (in the predicted saliency map) falls inside the object, it is considered a hit, else a miss. A hit on a previous example is considered a proxy for the model remembering its decision for the right reason.
  • Precision and recall are reported for the hit metric. Using RRR increases both precision (i.e., the model less often makes the correct decision without looking at the right evidence) and recall (i.e., the model less frequently makes an incorrect decision despite looking at the proper evidence).
diff --git a/_site/site/2020/10/19/Learning-Explanations-That-Are-Hard-To-Vary.html b/_site/site/2020/10/19/Learning-Explanations-That-Are-Hard-To-Vary.html new file mode 100644 index 00000000..b723cde7 --- /dev/null +++ b/_site/site/2020/10/19/Learning-Explanations-That-Are-Hard-To-Vary.html @@ -0,0 +1,86 @@ +

Introduction

  • The paper builds on the principle “good explanations are hard to vary” to propose that invariant mechanisms can be identified by finding explanations (say, model parameters) that are hard to vary across examples.
  • Link to the paper
  • Link to the code

Setup

  • Collection of d different datasets (from different environments), where each dataset is a collection of input-target tuples.
  • The objective is to learn a function f (also called a mechanism) that maps the input to the target (for all the environments).
  • The standard approach is to pool the loss for examples corresponding to the different environments and perform gradient updates on this average-pooled loss.
  • In this standard gradient-based setup, the model may not learn invariances for the following reasons:
    • The model learns the spurious features first, after which the training loss is too small to encourage further learning.
    • The pooled loss is generally computed by summing (or averaging) the loss corresponding to individual examples, so the gradient for each example is calculated independently. Each sample can be thought of as a dataset of size 1, for which all the features are relevant.
    • Gradient descent with averaging (of gradients across the environments) greedily maximizes learning speed, not invariance.
  • Taking an arithmetic mean can be seen as performing an OR operation (the sum can be high if any one of the constituents is high), whereas taking a geometric mean can be seen as performing an AND operation (the product can be high only if all the constituents are high). A toy numeric illustration follows this list.
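
A toy numeric illustration (mine) of the OR-vs-AND contrast:

```python
import math

# Per-environment gradients for two parameter components.
grads = {"env_a": [4.0, 4.0], "env_b": [4.0, 0.0]}
for dim in range(2):
    vals = [g[dim] for g in grads.values()]
    arith = sum(vals) / len(vals)              # OR-like: high if any value is high
    geom = math.prod(vals) ** (1 / len(vals))  # AND-like: high only if all are high
    print(dim, arith, geom)
# dim 0: arith 4.0, geom 4.0 (both environments agree)
# dim 1: arith 2.0, geom 0.0 (one environment carries no signal)
```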

Invariant Learning Consistency (ILC)

  • Given an algorithm \(A\), let \(\theta_{A}^{*}\) denote the set of convergence points of \(A\) when trained on all the environments.
  • Each convergence point is associated with a consistency score.
  • Intuitively, given a convergence point and an environment e, find the set of parameters equivalent to the convergence point (in terms of loss) with respect to e. Call this set S.
  • Evaluate the points in this set on all the remaining environments. For the given convergence point, an environment e’ is consistent with e if the maximum difference in loss between the two environments, over all points belonging to S, is small.
  • This idea is used to define the invariant learning consistency score for algorithm \(A\), which measures the expected consistency of the converged points (on the pooled data) across all the environments.
  • The paper shows that the consistency of the converged points is linked to the geometric mean of the Hessians and that, for the convex quadratic case, using the elementwise geometric mean of gradients improves consistency.
  • However, there are some practical challenges:
    • The geometric mean is defined only when all signs are consistent. This issue can potentially be handled by treating components with inconsistent signs as 0.
    • There is very little flexibility for “partial” agreement, and even a single zero gradient component can stop optimization for that component. This can probably be handled by not masking a component if many environments have a gradient for it.
    • The geometric mean needs to be computed in the log-domain (for numerical stability), but that can be computationally more expensive.
    • When using adaptive optimizers like Adam, the exact magnitude of the geometric mean will be ignored because of the rescaling for local curvature adaptation.
  • Some of these challenges can be handled by using average gradients when the geometric mean would be 0 and masking out components based on the sign.

AND-mask

  • The ideas from the previous section can be used to develop a practical algorithm called AND-mask.
  • Zero out gradient components whose signs are inconsistent across more than a threshold number (a hyper-parameter) of environments; a minimal sketch follows this list.
  • In the presence of purely random gradient patterns, the AND-mask decreases the signals’ strength exponentially fast.
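
A minimal sketch of the masking step (simplified by me from the paper's description; not the released implementation):

```python
import torch

def and_mask_gradient(env_grads, threshold=0.9):
    """Average gradient with sign-inconsistent components zeroed out.

    env_grads: tensor of shape (num_envs, num_params), one gradient row per
    environment. A component is kept only if at least `threshold` of the
    environments agree on its sign.
    """
    signs = torch.sign(env_grads)                        # -1, 0, or +1
    agreement = signs.sum(dim=0).abs() / env_grads.shape[0]
    mask = (agreement >= threshold).float()
    return mask * env_grads.mean(dim=0)
```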

Experiments


Synthetic Memorization Dataset

  • This is a binary classification task with two kinds of features: (i) “meaningful” features that are shared across environments but harder for the model to learn, and (ii) “shortcut” features that are easy to learn but not shared across environments.
  • While the dataset may look simple, it is difficult to find the invariant mechanism because the “shortcut” features allow for a simple, linear decision boundary with a large margin that is fast to learn, has perfect accuracy, is robust to input noise, and has no iid generalization gap.
  • Baselines:
    • MLPs trained with regularizers like dropout, L1, L2, and batch norm.
    • Domain Adversarial Neural Networks (DANN)
    • Invariant Risk Minimization (IRM)
  • In terms of results, AND-mask with L1/L2 regularizers gives the best results.
  • Empirically, the paper shows that the signal from the “meaningful” features is present when the gradients are averaged, but its magnitude is much smaller than the signal from the “shortcut” features.

Experiments on CIFAR-10

  • A ResNet model is trained on the CIFAR-10 dataset with random labels, with and without the AND-mask.
  • The model with the AND-mask did not memorize the data, whereas the model without the AND-mask did. As a sanity check, the paper ensured that both models generalize well when trained with the original labels.
  • Note that for this experiment, every example was treated as coming from its own environment.

Behavioral Cloning on CoinRun

  • Train an expert policy using PPO for 400M steps on the full distribution of levels.
  • Generate a dataset of state-action pairs. The training data consists of 1000 states from each of 64 levels, while the test data comes from 2000 levels.
  • A ResNet18 model is used as the imitation learning policy.
  • The exact implementation of the AND-mask is a little more involved, but the key takeaway is that the model trained with the AND-mask identifies invariant mechanisms across different levels.
diff --git a/_site/site/2020/11/02/One-Solution-is-Not-All-You-Need-Few-Shot-Extrapolation-via-Structured-MaxEnt-RL.html b/_site/site/2020/11/02/One-Solution-is-Not-All-You-Need-Few-Shot-Extrapolation-via-Structured-MaxEnt-RL.html new file mode 100644 index 00000000..8c6c0253 --- /dev/null +++ b/_site/site/2020/11/02/One-Solution-is-Not-All-You-Need-Few-Shot-Extrapolation-via-Structured-MaxEnt-RL.html @@ -0,0 +1,119 @@ +

Introduction

  • Key idea: practicing and remembering diverse solutions to a task can lead to robustness to variations of that task.
  • The paper proposes a framework to implement this idea - train multiple policies such that they are collectively robust to a new distribution over environments, while using a single training environment.
  • Link to the paper

Setup

  • During training, the agent has access to only one MDP.
  • During evaluation, the agent encounters a new MDP, which has the same state and action space but may have a different reward and transition function.
  • The agent is allowed some interactions (say k) with the test MDP and is then evaluated on it. This setup is referred to as few-shot robustness.

Structured Maximum Entropy Reinforcement Learning (SMERL)

  • Represent a set of policies using a latent variable policy (i.e., a policy conditioned on a latent variable z).
  • This has two benefits: (i) multiple policies can be represented by the same object, and (ii) diverse behaviors can be learned by encouraging the trajectories corresponding to different z to be different, while still being able to solve the task.
  • A diversity-inducing objective is used to encourage the agent to learn different trajectories for different z.
  • Specifically, the mutual information between p(Z) and the marginal trajectory distribution of the latent variable policy is maximized, subject to the constraint that each policy achieves close-to-optimal returns in the train MDP.
  • This mutual information is lower-bounded by the sum of mutual information terms over the individual states appearing in the trajectory.
  • An unsupervised reward function is defined using the mutual information between states and latent variables: \(r(s, a) = \log q_{\phi}(z \| s) - \log p(z)\), where \(q_{\phi}\) is a learned discriminator (see the sketch below).
  • This unsupervised reward is optimized only when the policy achieves a close-to-optimal return, i.e., when the environment return is close to the optimal return. Otherwise, the agent optimizes only the environment return.
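
A rough sketch of the gated reward (my own variable names and a simplified gating rule; the paper gates using returns measured against a pre-trained SAC baseline):

```python
import torch.nn.functional as F

def smerl_reward(env_reward, episode_return, optimal_return, epsilon,
                 discriminator_logits, z, log_p_z):
    """Add the diversity bonus only when the policy is close to optimal.

    discriminator_logits: outputs of q_phi(. | s) for the current state
    z: index of the latent variable sampled for this episode
    log_p_z: log-probability of z under the uniform prior
    """
    log_q_z_given_s = F.log_softmax(discriminator_logits, dim=-1)[z]
    diversity_bonus = log_q_z_given_s - log_p_z
    if episode_return >= optimal_return - epsilon:
        return env_reward + diversity_bonus
    return env_reward
```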

Implementation

  • SMERL is implemented using SAC with a latent variable maximum entropy policy.
  • The set of latent variables is a fixed discrete set \(Z\), and \(p(z)\) is set to be a uniform distribution over this set.
  • At the start of an episode, a \(z\) is sampled and used throughout the episode.
  • The discriminator \(q_{\phi}(z\|s)\) is trained to infer \(z\) from the visited states.
  • A baseline SAC agent is trained beforehand to evaluate whether the current training policy achieves a close-to-optimal environment return.
  • During evaluation, the policy corresponding to each latent variable is executed in the test MDP, and the policy with the maximum return is selected.

Theoretical Analysis

  • Given an MDP \(M\) and \(\epsilon>0\), the MDP robustness set is defined as the set of all MDPs \(M'\) where the optimal policy of \(M'\) produces the same trajectory distribution in \(M'\) as in \(M\). Moreover, on the training MDP \(M\), the optimal policies (corresponding to \(M\) and \(M'\)) obtain similar returns.
  • The paper shows that SMERL generalizes to MDPs belonging to the robustness set.
  • It also provides a simplified view of the optimization objective and shows how it naturally leads to a trajectory-centric mutual information objective.

Experiments

  • Environments:
    • 2D navigation environments with a point mass.
    • MuJoCo environments: HalfCheetah-Goal, Walker2d-Velocity, Hopper-Velocity.
  • On the 2D navigation environment, the paper shows that SMERL learns to use different trajectories to reach the goal.
  • On the MuJoCo setup, the evaluation shows that SMERL generally outperforms the best-performing baseline or is close to it on the different tasks.
  • Generally, higher train performance does not correlate with higher test performance, and no single policy performs best across all the tasks. Thus, it should be beneficial to learn multiple diverse policies that can be selected from during testing.
diff --git a/_site/site/2020/11/09/Searching-for-Build-Debt-Experiences-Managing-Technical-Debt-at-Google.html b/_site/site/2020/11/09/Searching-for-Build-Debt-Experiences-Managing-Technical-Debt-at-Google.html new file mode 100644 index 00000000..a529dcfe --- /dev/null +++ b/_site/site/2020/11/09/Searching-for-Build-Debt-Experiences-Managing-Technical-Debt-at-Google.html @@ -0,0 +1,140 @@ +

Introduction

  • The paper describes the efforts to control and repay the technical debt in the build system at Google (called the Build Debt).
  • Guiding principles:
    • Automate techniques to analyze and fix issues that contribute to technical debt.
    • Make it easier to do the right thing, as developers can incur technical debt unknowingly.
    • Make it hard to do the wrong thing, e.g., by building stricter checks into the build process.
  • Note that some of the metrics and design decisions may be outdated now (the paper was written in 2012). However, the core message is still relevant.
  • Link to the paper

Google’s Build System Debt

  • BUILD files encapsulate the specifications for building software.
  • Generally, these files are maintained manually, and the dependencies may not stay up-to-date over time.
  • In extreme cases, some build targets are not built for months. Such targets are called zombie targets.
  • Originally, any project could depend on any other project’s internal details, thus creating (sometimes unwanted) couplings.
  • If the lower-level project did not intend to expose some internal details, the unwanted couplings introduce technical debt and make it harder to modify the lower-level project.
  • One form of technical debt is visibility debt, or the cost of back-fitting visibility rules onto the existing build specifications to re-establish the appropriate encapsulations.
  • Another example of technical debt is dead code, which can confuse developers looking for useful APIs.

Dependency Debt

  • Over-declared or underutilized dependencies can slow the build and testing of systems.
  • Under-declared dependencies can make the build process brittle and make it difficult to remove over-declared dependencies.
  • Potential solutions for over-declared dependencies include:
    • Setting aside some dedicated time for fixing build rules. But this approach is not automated, and potential breakages make it harder for developers to do the right thing.
    • Automatically adding all the under-declared dependencies to the BUILD files. The system can raise an error if a direct dependency is missing, making it harder to do the wrong thing.
    • Automation can be applied to finding/reporting the over-declared dependencies as well.
  • Potential solutions for underutilized dependencies include:
    • While it is challenging to automate fixing underutilized dependencies, automating the discovery of such dependencies is still useful.
    • Highlighting dependencies with high cost and low removal effort could incentivize developers to clean up their projects.

Zombie Targets

  • Zombie targets can be identified by querying the results of build and test runs.
  • A target is marked as “dead” if the attempts to build it have failed for at least 90 days. Until then, build errors are considered transient.
  • A zombie target can be eliminated by deleting its definition from the BUILD file and deleting the source files that are reachable only via the zombie target.

Visibility Debt

  • Originally, the default visibility of all targets was public, leading to unintended dependencies.
  • The visibility of all the existing builds was set to legacy_public, and the default visibility was changed to private.
  • This encouraged developers to explicitly consider whether they wanted other projects to depend on their project.

Dead Flags

  • Google developed its own command-line parsing utilities and defined a set of recognized command-line flags for libraries and binaries.
  • Over time, the number of flags grew to half a million, and many of these flags are no longer useful (i.e., dead).
  • These dead flags can make it hard to understand and refactor code.
  • Existing flags were analyzed to check which ones had always been set to the same value; those were replaced by their constant values, clearing about 150 thousand flags.
  • Removing dead flags also helps to clean up dead/unreachable code.
diff --git a/_site/site/2020/11/16/Data-Management-for-Internet-Scale-Single-Sign-On.html b/_site/site/2020/11/16/Data-Management-for-Internet-Scale-Single-Sign-On.html new file mode 100644 index 00000000..8fe53e7f --- /dev/null +++ b/_site/site/2020/11/16/Data-Management-for-Internet-Scale-Single-Sign-On.html @@ -0,0 +1,127 @@ +

Introduction

  • The paper describes the architecture of an erstwhile single sign-on (SSO) service used by Google, called Google Accounts (2006).
  • Note that some of the metrics and design decisions may be outdated now (the paper was written in 2006). However, the core message is still relevant.
  • Link to the paper

Operational Constraints

  • SSO’s availability affects the availability of all applications that require user sign-in.
  • Generally, systems can achieve high availability by sacrificing consistency, but given the nature of SSO (matching usernames/passwords), providing an inconsistent view is not a good option, and single-copy consistency is a usability requirement.

Berkeley DB

  • Berkeley DB is an embedded, high-performance, scalable, transactional storage system for key-value data that provides both keyed and sequential lookup.
  • It provides a primary-copy replication model with a single writer (called the master) and multiple read-only replicas.
  • All writes are sent to the master, which first applies the changes and then propagates them to the replicas.
  • The master and the replicas have identical logs, and in case of master failure, a new master is elected from the replicas.
  • Some synchronization may be needed between the replicas if, e.g., the master dies in the middle of a transaction.

SSO Architecture

  • The SSO service maps usernames to user account data and services to service-specific data.
  • The SSO database is partitioned into shards, where each shard is a replicated Berkeley DB (having 5 to 15 replicas).
  • Each replica stores the data in a B+-link tree data structure.
  • Consistent reads must go to the master, while non-master replicas can serve “stale” reads.
  • In the case of larger replication groups (say 15 replicas), only a subset of replicas can become the master (“electable replicas”).
  • In general, replicas are spread geographically to handle machine failures, network failures, and data center failures.
  • Replicas in a shard are kept close to reduce the communication latency, which affects the time to commit a write operation or to elect a new master.
  • Some of the shards implement the ID-map, i.e., the map from username to userid and from userid to shards.

Database Integration

  • Berkeley DB leaves decisions regarding quorums, leases, etc., up to the application.

Quorums

  • SSO chooses a quorum protocol that guarantees that updates are never lost.
  • For write queries, the master waits for a positive acknowledgment from a majority of the replicas, including itself, before marking the query as completed (a toy sketch of the majority rule follows below).
  • When selecting a new leader, SSO requires a majority of replicas to agree. Moreover, Berkeley DB elections always choose a replica with the latest log entry, thus guaranteeing that the new master’s log will include all the previous master’s updates.
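
A toy sketch of the majority rule (my own formulation):

```python
def write_committed(acks, num_replicas):
    """Majority-quorum rule: commit once a majority (master included) has
    acknowledged, so any two quorums intersect and no update is lost."""
    return acks > num_replicas // 2

# With 5 replicas, 3 acknowledgments (e.g., the master plus 2 replicas) suffice.
assert write_committed(3, 5) and not write_committed(2, 5)
```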

Leases

  • The master holds a master lease when responding to read queries and refreshes this lease periodically by communicating with a majority of replicas.
  • The lease guarantees that the master is not returning stale data if a partition or failure causes it to lose its mastership, i.e., holding the lease guarantees that the master is still the master.
  • Moreover, elections cannot be completed within the lease timeout interval.

Replica Group Membership

  • SSO maintains a replica configuration containing the logical (DNS) name and IP address of each replica.
  • Changes to the configuration are specified in a file that the master reads periodically.
  • If the configuration changes, the master initiates a configuration change and updates the database.
  • Non-master replicas can get the new configuration from the database.
  • A new replica, or a replica that lost state (say, due to a failure), starts as a non-voting replica and cannot participate in an election until it has caught up with the master as of the time the replica (re)joined.
diff --git a/_site/site/2020/11/23/Exploring-Simple-Siamese-Representation-Learning.html b/_site/site/2020/11/23/Exploring-Simple-Siamese-Representation-Learning.html new file mode 100644 index 00000000..f99c1500 --- /dev/null +++ b/_site/site/2020/11/23/Exploring-Simple-Siamese-Representation-Learning.html @@ -0,0 +1,107 @@ +

Introduction

  • The paper shows that Siamese networks can be used for unsupervised learning with images without needing techniques like negative sample pairs, large-batch training, or momentum encoders. The training mechanism is referred to as the SimSiam method.
  • Link to the paper

Method

  • Given an input image x, create two augmented views x1 and x2.
  • These views are processed by an encoder network f.
  • One of the views (say x1) is processed by the encoder f as well as a predictor MLP h to obtain a projection p1, i.e., p1 = h(f(x1)).
  • The second view (x2) is processed only by the encoder f to obtain an encoding z2, i.e., z2 = f(x2).
  • The negative cosine similarity between p1 and z2 is minimized, with the catch that the resulting gradients are not propagated through z2, i.e., Loss = D(p1, stopgrad(z2)), where D is the negative cosine similarity and stopgrad is an operation that stops the flow of gradients.
  • In practice, both the (p1, z2) and (p2, z1) pairs are used for computing the loss, i.e., Loss = 0.5 * (D(p1, stopgrad(z2)) + D(p2, stopgrad(z1))). A sketch follows below.
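
A minimal PyTorch-style sketch of the symmetrized loss described above (close in spirit to the paper's pseudocode):

```python
import torch.nn.functional as F

def simsiam_loss(f, h, x1, x2):
    """f: encoder, h: predictor MLP; x1, x2: two augmented views."""
    z1, z2 = f(x1), f(x2)
    p1, p2 = h(z1), h(z2)

    def D(p, z):
        # Negative cosine similarity; detach() implements stopgrad.
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    return 0.5 * (D(p1, z2) + D(p2, z1))
```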

Implementation Details

  • The encoder uses batch norm in all its layers (including the output), while the predictor MLP uses batch norm only in its hidden layers.
  • SGD optimizer with learning rate 0.05 * batch_size / 256, a cosine learning rate decay schedule, and SGD momentum of 0.9.
  • Unsupervised pretraining is done on the ImageNet dataset, followed by training a supervised linear classifier on the frozen representations.

Results

  • The stop-gradient operation is necessary to avoid a degenerate solution. Without stop-gradient, the model maps all inputs to a constant z.
  • If the predictor MLP is removed, the method does not work (because of the loss’s symmetric nature). If the loss is made asymmetric, the method still does not work without the predictor. However, asymmetric loss + predictor works.
  • Keeping the predictor fixed (i.e., not updating it during training) avoids collapse but leads to poor validation performance.
  • Training the predictor with a constant learning rate works better in practice, likely because the predictor needs to keep adapting before the encoder is sufficiently trained.
  • The method works well across different batch sizes.
  • Removing batch norm layers from all the networks does not lead to collapse, though the model’s performance degrades on the validation dataset. Adding batch norm to the hidden layers alone is sufficient.
  • Adding batch norm to the encoder’s output further improves the performance, but adding batch norm to all the layers of all the networks makes the training unstable, with the loss oscillating.
  • Overall, while batch norm helps to improve performance, it is not sufficient to avoid collapse.
  • The setup does not collapse when the cross-entropy loss replaces the cosine loss.

What is SimSiam solving?

  • Given that the stop-gradient operation seems to be the critical ingredient for avoiding collapse, the paper hypothesizes that SimSiam is solving a different underlying optimization problem.
  • The hypothesis is that SimSiam is implementing an Expectation-Maximization (EM) style algorithm with two sets of variables and two underlying sub-problems.
  • The paper performs several experiments to test this hypothesis. For example, it considers k SGD steps for the first sub-problem before performing an update for the second sub-problem, showing that the alternating optimization is a valid formulation, of which SimSiam is a particular case.

Comparison to other methods

  • SimSiam achieves the highest accuracy among SimCLR, MoCo, BYOL, and SwAV when training for under 100 epochs. However, it lags behind the other methods when trained longer.
  • SimSiam’s representations are transferable beyond the ImageNet task.
  • Adding the predictor and the stop-gradient operator to SimCLR does not improve its performance.
diff --git a/_site/site/2020/11/30/Consistency-Tradeoffs-in-Modern-Distributed-Database-System-Design.html b/_site/site/2020/11/30/Consistency-Tradeoffs-in-Modern-Distributed-Database-System-Design.html new file mode 100644 index 00000000..8f28c788 --- /dev/null +++ b/_site/site/2020/11/30/Consistency-Tradeoffs-in-Modern-Distributed-Database-System-Design.html @@ -0,0 +1,123 @@ +

Introduction

  • The CAP theorem has been influential in the design decisions for distributed databases.
  • However, designers incorrectly assume that the CAP theorem “always” imposes a tradeoff between availability and consistency. In fact, the tradeoff applies only in the case of partitions.
  • The CAP theorem led to the development of highly available systems with reduced consistency models (and reduced ACID guarantees).
  • Another tradeoff - between latency and consistency - has also been influential in database design.
  • The paper unifies the CAP and latency-consistency tradeoffs into a single formulation called PACELC.
  • Note that some of the observations, especially the ones about specific databases, may be outdated now (the paper was written in 2012). However, the core message is still relevant.
  • Link to the paper

Latency-Consistency Tradeoff

  • Low latency (or high availability) means that the system must replicate data.
  • In case of an update query, three possibilities arise:
    • The system can send data updates to all the replicas at once. This leads to two possibilities:
      • A replica can receive the update queries in an arbitrary order, thus breaking consistency with other replicas.
      • Alternatively, the replicas could use some protocol to agree on the order of updates. However, this can introduce latency.
    • The update queries can first be sent to a master replica. The master applies the updates and sends them to the other replicas using one of the following strategies:
      • Synchronous replication, where the master waits for the updates to be applied to the replica(s). However, this approach introduces latency.
      • Asynchronous replication, where the master treats the update as complete before it has been fully replicated. In this case, the latency-consistency tradeoff depends on how read queries are handled:
        • The system can send all read queries to the master. There are no consistency issues, but additional latency is introduced because all the read queries go to the same replica, potentially overloading it.
        • Alternatively, a read query can be served from any replica. While this improves read latency, the results can now be inconsistent.
      • A mix of synchronous and asynchronous replication, i.e., some of the writes are replicated synchronously and the rest asynchronously. Again, the latency-consistency tradeoff depends on how read queries are handled:
        • If the read is routed to at least one replica that has been synchronously updated, consistency can be preserved, at the cost of additional latency for discovering the updated replica, etc.
        • If the read query cannot be routed to an updated replica (maybe because none of the replicas is updated), then either latency suffers or an inconsistent read is performed.
    • The update query is first sent to an arbitrary replica. This is the same as the previous case, with the query going to an arbitrary replica instead of the master replica, and it suffers from the same latency issues.
  • In a nutshell, the tradeoff between latency and consistency is always present, irrespective of network failures.
  • This contrasts with the CAP theorem, which imposes the tradeoff between availability and consistency only in the case of a network partition.

PACELC

  • If there is a partition (P), how does the system trade off availability (A) and consistency (C); else (E), when the system is running without failures, how does the system trade off latency (L) and consistency (C)?
  • The latency-consistency tradeoff (ELC) is relevant only when the data is replicated.
  • The default versions of Dynamo, Cassandra, and Riak were PA/EL systems, i.e., if a partition occurs, availability is prioritized; in the absence of a partition, lower latency is prioritized.
  • Fully ACID systems (VoltDB, H-Store, and Megastore) and others like BigTable and HBase are PC/EC, i.e., they prioritize consistency and give up availability and lower latency.
  • MongoDB can be classified as a PA/EC system, while PNUTS is a PC/EL system.
diff --git a/_site/site/2020/12/07/CAP-twelve-years-later-How-the-rules-have-changed.html b/_site/site/2020/12/07/CAP-twelve-years-later-How-the-rules-have-changed.html new file mode 100644 index 00000000..efce6723 --- /dev/null +++ b/_site/site/2020/12/07/CAP-twelve-years-later-How-the-rules-have-changed.html @@ -0,0 +1,154 @@ +

Introduction

  • The CAP theorem states that any system sharing data over the network can have at most two of three desirable properties:
    • consistency (C), i.e., a single, up-to-date copy of the data;
    • high availability (A) of that data (for updates); and
    • tolerance to network partitions (P).
  • This “2 of 3” formulation is misleading, as it oversimplifies the interplay between the properties.
  • Link to the paper

ACID vs. BASE

  • ACID is a design philosophy that focuses on consistency, as reflected in traditional relational databases.
  • The four properties in ACID are:
    • Atomicity (A), i.e., operations are atomic: either the entire operation succeeds or none of it does.
    • Consistency (C), i.e., a transaction preserves all the rules. Note that the consistency in CAP is a subset of the consistency in ACID.
    • Isolation (I), i.e., transactions occur in isolation and do not affect each other.
    • Durability (D), i.e., transactions are durable irrespective of system failures.
  • BASE is an alternate design philosophy that focuses on availability, as reflected in NoSQL databases.
  • The properties in BASE are:
    • Basic Availability (BA), i.e., the database appears to work most of the time.
    • Soft state (S), i.e., the system’s state can change over time as it becomes eventually consistent.
    • Eventual consistency (E), i.e., the system will eventually become consistent over time.

CAP confusion

  • Generally, partitionability is seen as a must-have, thus reducing the choice to one between availability and consistency.
  • This view is somewhat misleading because the choice between C, A, and P is not binary but granular.
  • The choice between C and A can occur at various granularity levels, and different components (of a larger system) can prioritize different aspects.
  • Similarly, the CAP theorem generally ignores latency even though it is closely related to partitionability. For example, failing to achieve consistency within a time bound (i.e., latency) implies a partition.
  • In general, there is no global notion of partition - some subset of nodes may experience a partition while others may not.
  • Once a partition is detected, the system can then choose between C and A.

Managing Partitions

  • Three-step process for managing partitions:

    • Detect the start of a partition.

    • Enter an explicit partition mode that may limit some operations.

      • Possible strategies:

        • Reduce availability by limiting some operations.

        • Record extra information that can be used during partition recovery.

      • The strategy depends on the invariants that the system should maintain.

      • For example, if the invariant is that the keys (in a table) should be unique, the system could allow duplicate keys for some time and perform a de-duplication step during partition recovery.

      • A counterexample is a monetary transaction (e.g., charging a credit card). In such cases, the system could disable the operation and record it for performing later. Sometimes this “unavailability” is not visible to the user.

      • History of operations (over replicas across different partitions) can be tracked using version vectors of the form (node, logical time). The system can easily recreate the order in which the operations were executed (or mark them as concurrent). A minimal sketch of version vectors follows this list.

    • Initiate partition recovery when communication is restored and make the state across the partitions consistent.

    • One common approach is to revert to the state when the partition was detected and apply the operations consistently across all the replicas.

    • This may require some extra effort to merge conflicts.

    • One workaround can be to constrain the use of certain operations so that the system does not encounter merge conflicts during recovery.

    • Sometimes, certain invariants may be violated when the system is in the partition mode and need to be fixed during recovery.

    • The key takeaway is that when partitions exist, the choice between availability and consistency is not binary, and both can be optimized for.
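To make the version-vector idea concrete, here is a minimal, illustrative Python sketch (not from the paper; all names are made up): each replica advances its own logical clock on a write, and during partition recovery, two histories where neither dominates the other are flagged as concurrent and must be merged.

```python
from dataclasses import dataclass, field

@dataclass
class VersionVector:
    """Tracks, per node, the highest logical time observed for an object."""
    clock: dict = field(default_factory=dict)  # node id -> logical time

    def record(self, node: str):
        # A write on `node` advances that node's logical time.
        self.clock[node] = self.clock.get(node, 0) + 1

    def dominates(self, other: "VersionVector") -> bool:
        # self happened-after other if it is >= on every node's entry.
        return all(t <= self.clock.get(n, 0) for n, t in other.clock.items())

    def concurrent(self, other: "VersionVector") -> bool:
        # Neither history dominates: the writes happened during a partition.
        return not self.dominates(other) and not other.dominates(self)

a, b = VersionVector(), VersionVector()
a.record("node-1"); b.record("node-2")  # writes on two sides of a partition
assert a.concurrent(b)                  # flagged for merging during recovery
```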
diff --git a/_site/site/2020/12/14/Cassandra-a-decentralized-structured-storage-system.html b/_site/site/2020/12/14/Cassandra-a-decentralized-structured-storage-system.html new file mode 100644 index 00000000..3bfd749e --- /dev/null +++ b/_site/site/2020/12/14/Cassandra-a-decentralized-structured-storage-system.html @@ -0,0 +1,101 @@ +

Introduction

  • Cassandra is a distributed storage system that runs over cheap commodity servers and handles high write throughput while maintaining low latency for read operations.

  • At the time of writing, it was used to support the Inbox Search feature at Facebook.

  • Link to the paper

  • Link to the implementation

Data Model

  • A table is a distributed multidimensional map.

  • The key is a string (generally 16-36 bytes long), while the value is a structured object.

  • Every operation under a single row key is atomic per replica.

  • Columns are grouped together into sets called column families.

  • There are two types of column families:

    • Simple column families.

    • Super column families: visualized as a column family within a column family.

  • Columns can be sorted by name or time (used to display results in time-sorted order).

  • The API supports insert, get and delete operations.

System Architecture


Handling Requests

  • Any read/write request gets routed to any node in the cluster. The node determines the replicas for a given key and routes the request.

  • For a write query, the system waits for a quorum of replicas to acknowledge the write’s completion.

  • For a read query, the system either routes the request to the closest replica (which might fetch stale results) or routes the request to all replicas and waits for a quorum of responses.

Partitioning

  • Cassandra partitions data across the cluster using consistent hashing with an order-preserving hash function (a minimal sketch of a consistent-hash ring is shown below).

  • The hash function’s output range is treated as a fixed circular ring, and each node is assigned a random position on the ring.

  • An incoming request specifies a key, and the key’s position on the ring determines which node serves the request.

  • One benefit of this approach is that the addition/removal of a node only affects its immediate neighbors.

  • However, randomly assigning nodes leads to non-uniform data and load distribution.

  • Cassandra uses the load information and moves lightly loaded nodes on the ring to reduce the load on heavily loaded nodes.
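A minimal, illustrative consistent-hash ring follows. Note that Cassandra uses an order-preserving hash function, while this sketch uses MD5 purely for brevity; the class and names are made up.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: a key is served by the first node at or
    after the key's position on the ring, wrapping around."""

    def __init__(self, nodes):
        self._positions = sorted((self._hash(n), n) for n in nodes)
        self._keys = [p for p, _ in self._positions]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        idx = bisect.bisect_right(self._keys, self._hash(key)) % len(self._keys)
        return self._positions[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user-42"))  # adding/removing a node only remaps neighbors
```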

Replication

  • Each data item is replicated at N hosts, where N is the per-instance replication factor.

  • Cassandra supports the following replication policies: Rack Unaware, Rack Aware (within a datacenter), and Datacenter Aware.

  • For the “Rack Aware” and “Datacenter Aware” strategies, Zookeeper elects a leader among the nodes, and the leader holds metadata about which ranges each node is responsible for.

  • In case of node failures and network partitions, the quorum requirements are relaxed.

Membership

  • Cluster membership is based on Scuttlebutt, a very efficient anti-entropy Gossip-based mechanism.

  • Cassandra uses a modified version of the $\phi$ Accrual Failure Detector for detecting failures, which provides a suspicion level (of failure) for each node.

Bootstrapping

  • A node, starting for the first time, chooses a random position in the ring.

  • This information is persisted on the local disk and on Zookeeper, and gossiped around the cluster (so any node can route any query in the cluster).

  • During bootstrapping, the newly joined node reads a list of contact points (within the cluster) from a configuration file.

Local Persistence

  • Generally, a write operation involves a write into a commit log (for durability and recoverability), followed by a write into the in-memory data structures.

  • A read operation starts with querying the in-memory data and then looks into the filesystem.

  • Read queries on the filesystem use bloom filters to skip files that cannot contain the key (see the sketch below).

  • Column indices are maintained to make it faster to look up relevant columns.
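The bloom-filter read path can be illustrated with a small sketch (not Cassandra's actual implementation): the filter may return false positives but never false negatives, so a negative answer safely skips a disk read.

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: set membership with false positives but no false
    negatives, so a file can be safely skipped when the filter says 'no'."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = 0

    def _indices(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key: str):
        for idx in self._indices(key):
            self.bits |= 1 << idx

    def might_contain(self, key: str) -> bool:
        return all(self.bits & (1 << idx) for idx in self._indices(key))

sstable_filter = BloomFilter()
sstable_filter.add("row-key-1")
assert sstable_filter.might_contain("row-key-1")  # must check the file on disk
# might_contain("row-key-2") is usually False, so that disk read is skipped
```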

Implementation Details

  • Components are implemented in Java.

  • System control messages use UDP, while messages for replication and request routing use TCP.

  • A new commit log is rolled out after the older one exceeds 128MB in size.

  • All the data is indexed using a primary key.

  • Data on the disk is chunked into sequences of blocks. Each block contains at most 128 keys and is demarcated by a block index.

  • When the data is written to the disk, a block index is generated and maintained in memory for faster access.

  • A compaction process is performed to merge multiple files (on disk) into one file.

Practical Experience

  • Data from MySQL servers is added to Cassandra using MapReduce processes.

  • Although Cassandra is a completely decentralized system, adding some coordination (via Zookeeper) is helpful.

  • For Inbox Search, a per-user index is maintained for all the messages.

  • For “term search”, the key is the userid, and the words in the message become the super columns.

  • For searching all the messages ever sent/received by a user, the key is the userid, and the recipient ids are the super columns.
diff --git a/_site/site/2020/12/21/Design-patterns-for-container-based-distributed-systems.html b/_site/site/2020/12/21/Design-patterns-for-container-based-distributed-systems.html new file mode 100644 index 00000000..1d569013 --- /dev/null +++ b/_site/site/2020/12/21/Design-patterns-for-container-based-distributed-systems.html @@ -0,0 +1,68 @@ +

Introduction

  • The paper describes three design patterns for container-based distributed systems: the single-container pattern, the single-node pattern, and the multi-node pattern.

  • Link to the paper

Single-container management patterns

  • Traditionally, containers have exposed three functions: run, pause, and stop.

  • A richer API can be implemented to provide fine-grained control to system developers and operators.

  • Similarly, much more application information (including monitoring metrics) can be exposed.

  • The container interface can be used to define a contract for a complex lifecycle. For example, instead of arbitrarily shutting down the container, the system could signal that the container will be terminated, giving it some time to perform cleanup/follow-up actions.

Single-node, multi-container pattern


Sidecar pattern

  • Multiple containers extend and enhance the main container.

  • For example, a web-server serves from the local disk (main container) while a sidecar container updates the data.

  • Benefits:

    • independent development, deployment, and scaling of containers

    • possibility of combining different types of containers

    • failure containment boundary, i.e., one failing container need not bring down the entire system.

Ambassador pattern

  • Ambassador containers proxy communication to and from the main container, hiding the complexities of communicating with a distributed (multi-shard) system that may be written in a different language.

Adapter pattern

  • Adapter containers standardize output and interfaces across containers to provide a simple, homogenized view to external applications.

  • A common example is using a single tool for collecting/processing metrics from multiple applications.

  • This is the opposite of the ambassador pattern, which aims to provide a simplified view of the external world to an application.

Multi-node application patterns


Leader election pattern

  • In a sharded (or replication-based) system, the system may have to elect a leader (or multiple leaders) among the replicas (or shards).

  • Instead of using a leader election library, a leader election container can be used (that communicates with other containers over, say, HTTP). This removes the restriction of using a leader election library compatible with the containers (e.g., one written in the same language).

Work queue pattern

  • A work coordinator container can queue work across different containers, each of which may have a different implementation or dependencies, thus removing the restriction that all the work items use the same runtime.

Scatter/gather pattern

  • An external client sends a request to a root container.

  • This container fans out the request to many containers that may perform the computation in parallel.

  • The root container gathers these parallel computations’ results and aggregates them into a response for the external client (a minimal sketch follows).
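A minimal sketch of the scatter/gather flow, using Python's asyncio as a stand-in for containers communicating over a network (all names here are hypothetical):

```python
import asyncio

async def leaf_worker(shard_id: int, query: str) -> list:
    # Stand-in for a leaf container computing a partial result.
    await asyncio.sleep(0.01)
    return [f"shard-{shard_id} result for {query!r}"]

async def root(query: str, num_shards: int = 3) -> list:
    # Fan out to all shards in parallel, then gather and aggregate.
    partials = await asyncio.gather(
        *(leaf_worker(i, query) for i in range(num_shards))
    )
    return [item for partial in partials for item in partial]

print(asyncio.run(root("search terms")))
```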
diff --git a/_site/site/2021/01/04/Compositional-Explanations-of-Neurons.html b/_site/site/2021/01/04/Compositional-Explanations-of-Neurons.html new file mode 100644 index 00000000..a1a6fb08 --- /dev/null +++ b/_site/site/2021/01/04/Compositional-Explanations-of-Neurons.html @@ -0,0 +1,155 @@ +

Introduction

  • The paper describes a method to explain/interpret the representations learned by individual neurons in deep neural networks.

  • The explanations are generated by searching for logical forms defined by a set of composition operators (like OR, AND, NOT) over primitive concepts (like water).

  • Link to the paper

Generating compositional explanations

  • Given a neural network $f$, the goal is to explain a neuron’s behavior (of this network) in human-understandable terms.

  • Previous work builds on the idea that a good explanation is a description that identifies the inputs for which the neuron activates.

  • Given a set of pre-defined atomic concepts $c \in C$ and a similarity measure $\delta(n, c)$, where $n$ represents the activation of the $n^{th}$ neuron, the explanation for the $n^{th}$ neuron is the concept most similar to $n$.

  • For images, a concept could be represented as an image segmentation map. For example, the water concept can be represented by the segments of the images that show water.

  • The similarity can be measured by first thresholding the neuron activations (to get a neuron mask) and then computing the IoU score (or Jaccard Similarity) between the neuron mask and the concept (see the sketch below).

  • One limitation of this approach is that the explanations are restricted to pre-defined concepts.

  • The paper expands the set of candidate concepts by considering logical forms over the atomic concepts.

  • In theory, the search space explodes exponentially. In practice, it is restricted to explanations with at most $N$ atomic concepts, and beam search is performed (instead of exhaustive search).
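A minimal sketch of the IoU-based scoring over compositional candidates, with toy masks and a top-activation quantile threshold standing in for the paper's exact setup:

```python
import numpy as np

def iou(neuron_mask: np.ndarray, concept_mask: np.ndarray) -> float:
    """IoU / Jaccard similarity between two binary masks."""
    intersection = np.logical_and(neuron_mask, concept_mask).sum()
    union = np.logical_or(neuron_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

activations = np.random.rand(8, 8)                    # toy activation map
neuron_mask = activations > np.quantile(activations, 0.95)  # threshold -> mask
water = np.random.rand(8, 8) > 0.5                    # toy concept segmentations
river = np.random.rand(8, 8) > 0.5

# Compositional candidates are built with logical operators over concepts;
# the paper searches this space with beam search instead of enumerating it.
candidates = {
    "water": water,
    "water OR river": np.logical_or(water, river),
    "water AND NOT river": np.logical_and(water, ~river),
}
best = max(candidates, key=lambda name: iou(neuron_mask, candidates[name]))
print(best)
```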

Setup

  • Image Classification Setup

  • NLI Setup

    • A BiLSTM baseline followed by MLP layers, trained on the Stanford Natural Language Inference (SNLI) corpus.

    • The penultimate hidden layer (of the MLP component) is probed for sentence-level explanations.

    • Concepts are created using the 2000 most common words in the validation split of the SNLI dataset.

    • Additional concepts are created based on the lexical overlap between premise and hypothesis.

Do neurons learn compositional concepts?

  • Image Classification Setup

    • As $N$ increases, the mean IoU increases (i.e., the explanation quality increases), though the returns diminish beyond $N=10$.

    • Manual inspection of 128 neurons and their length-10 explanations shows that 69% of the neurons learned some meaningful combination of concepts, while 31% learned some unrelated concepts.

    • The meaningful combinations of concepts include:

      • perceptual abstraction that is also lexically coherent (e.g., “skyscraper OR lighthouse OR water tower”).

      • perceptual abstraction that is not lexically coherent (e.g., “cradle OR autobus OR fire escape”).

      • specialized abstraction of the form L1 AND NOT L2 (e.g., (water OR river) AND NOT blue).

  • NLI Setup

    • As $N$ increases, the mean IoU increases (as in the image classification setup), though here the IoU keeps increasing past $N=30$.

    • Many neurons correspond to lexical features. For example, some neurons are gender-sensitive or activate for verbs like sitting, eating, or sleeping. Some neurons activate when the lexical overlap between premise and hypothesis is high.

Do interpretable neurons contribute to model accuracy?

  • In the image classification setup, the more interpretable the neuron is, the more accurate the model is (when the neuron is active).

  • However, the opposite trend is seen in NLI models, i.e., the more interpretable neurons are less accurate.

  • Key takeaway - interpretability (as measured by the paper) is not always correlated with performance. Given a concept space, the identified behaviors may be correlated or anti-correlated with the model’s performance.

Targeting explanations to change model behavior

  • The idea is to construct examples that activate (or inhibit) certain neurons, causing a change in the model’s predictions.

  • These adversarial examples are referred to as “copy-paste” adversarial examples.

  • For example, the neuron corresponding to “(water OR river) AND (NOT blue)” is a major contributor for detecting the “swimming hole” class. An adversarial example is created by making the water blue. This prompts the model to predict “grotto” instead of “swimming hole.”

  • Similarly, in the NLI model, a neuron detects the word “nobody” in the hypothesis as highly indicative of contradiction. An adversarial example can be created by adding the word “nobody” to the hypothesis, prompting the model to predict contradiction when the true label should be neutral.

  • These observations support the hypothesis that one can use explanations to create adversarial examples.
diff --git a/_site/site/2021/01/11/GPipe-Easy-Scaling-with-Micro-Batch-Pipeline-Parallelism.html b/_site/site/2021/01/11/GPipe-Easy-Scaling-with-Micro-Batch-Pipeline-Parallelism.html new file mode 100644 index 00000000..02fc15f4 --- /dev/null +++ b/_site/site/2021/01/11/GPipe-Easy-Scaling-with-Micro-Batch-Pipeline-Parallelism.html @@ -0,0 +1,70 @@ +

Introduction

  • The paper introduces GPipe, a pipeline parallelism library for scaling networks that can be expressed as a sequence of layers.

  • Link to the paper

Design

  • Consider training a deep neural network with $L$ layers using $K$ accelerators (say GPUs).

  • The $i^{th}$ layer has a forward function $f_i$, a backward function $b_i$, weights $w_i$, and a cost $c_i$ (say the memory footprint or computational time).

  • GPipe partitions this network into $K$ cells and places the $i^{th}$ cell on the $i^{th}$ accelerator. Output from the $i^{th}$ accelerator is passed to the $(i+1)^{th}$ accelerator as input.

  • During the forward pass, the input batch (of size $N$) is divided into $M$ equal micro-batches. These micro-batches are pipelined through the $K$ accelerators one after another (see the sketch below).

  • During the backward pass, gradients are computed for each micro-batch. The gradients are accumulated and applied at the end of each mini-batch.

  • In batch normalization, the statistics are computed over each micro-batch (used during training) and over the mini-batch (used during evaluation).

  • Micro-batching improves over the naive model parallelism approach by reducing the underutilization of resources (due to the network’s sequential dependencies).
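A toy sketch of micro-batch gradient accumulation, the core of the schedule. For brevity, both "cells" live on one device; the actual library also handles device placement, pipelining, and communication.

```python
import torch
import torch.nn as nn

# Stand-in for a partitioned network: in GPipe, each cell would live on
# its own accelerator.
cell_1 = nn.Linear(16, 32)
cell_2 = nn.Linear(32, 4)
params = list(cell_1.parameters()) + list(cell_2.parameters())
opt = torch.optim.SGD(params, lr=0.1)

batch_x, batch_y = torch.randn(8, 16), torch.randn(8, 4)
num_micro_batches = 4  # M in the paper's notation

opt.zero_grad()
for micro_x, micro_y in zip(batch_x.chunk(num_micro_batches),
                            batch_y.chunk(num_micro_batches)):
    out = cell_2(cell_1(micro_x))          # forward through the pipeline cells
    loss = nn.functional.mse_loss(out, micro_y) / num_micro_batches
    loss.backward()                        # grads accumulate across micro-batches
opt.step()                                 # single update per mini-batch
```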

Performance Optimization

  • GPipe supports re-materialization (or checkpointing), i.e., during the forward pass, only the output activations (at partition boundaries) are stored.

  • During the backward pass, the forward function is recomputed at each accelerator. This trades off memory requirements against increased time.

  • One potential downside is that partitioning can introduce some idle time per accelerator (referred to as the bubble overhead). However, with a sufficiently large number of micro-batches (more than 4 times the number of partitions), the bubble overhead is negligible.

Performance Analysis

  • Two different types of model architectures are compared: the AmoebaNet convolutional model and the Transformer sequence-to-sequence model.

  • For AmoebaNet, the size of the largest trainable model (on a single 8GB Cloud TPU v2) increases from 82M to 318M parameters. Further, a 1.8-billion-parameter model can be trained on 8 accelerators (a 25x improvement in size using GPipe).

  • For Transformers, GPipe scales the model size to 83.9B parameters with 128 partitions (a 298x improvement in size compared to a single accelerator).

  • Since the computation is evenly distributed across transformer layers, the training throughput scales almost linearly with the number of devices.

  • Quantitative experiments on ImageNet and multilingual machine translation show that models can be effectively trained using GPipe.
diff --git a/_site/site/2021/01/18/Energy-based-Models-for-Continual-Learning.html b/_site/site/2021/01/18/Energy-based-Models-for-Continual-Learning.html new file mode 100644 index 00000000..3ac8b8ab --- /dev/null +++ b/_site/site/2021/01/18/Energy-based-Models-for-Continual-Learning.html @@ -0,0 +1,134 @@ +

Introduction

  • The paper proposes to use Energy-based Models (EBMs) for Continual Learning.

  • In classification tasks, the standard approach uses a cross-entropy objective function along with a normalized probability distribution.

  • However, cross-entropy reduces the likelihood of all negative classes when updating the model for a given sample, potentially leading to catastrophic forgetting.

  • Classification can be seen as learning an EBM across separate classes.

  • During an update, the energy for a sample paired with its ground-truth class decreases, while the energy for the sample paired with the negative classes increases.

  • Unlike the cross-entropy loss, EBMs allow choosing which negative classes to update.

  • Link to the paper

Applications of EBMs for Continual Learning

  • EBMs can be used for class-incremental learning without requiring a replay buffer or a generative model for replay.

  • EBMs can be used for continual learning in setups without task boundaries, i.e., setups where the data distribution can change without a clear separation between tasks.

EBMs

  • The Boltzmann distribution is used to define the conditional likelihood of label $y$ given an input $x$, i.e., $p(y|x) = \frac{\exp(-E(x, y))}{Z(x)}$ where $Z(x) = \sum_{y' \in Y} \exp(-E(x, y'))$. Here, $E$ is the learned energy function that maps an input-label pair to a scalar energy value.

  • During training, the contrastive divergence loss is used.

  • During inference, the class for which the input-class pair has the least energy is selected as the predicted class (see the sketch below).
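A minimal sketch of energy-based classification with a made-up dot-product energy function (the paper's actual energy network conditions on the class via an attention mechanism):

```python
import torch

def energy(x: torch.Tensor, class_embeddings: torch.Tensor) -> torch.Tensor:
    # Hypothetical energy: negative dot product between the sample
    # representation and each class embedding (lower energy = better match).
    return -(x * class_embeddings).sum(dim=-1)

x = torch.randn(1, 8)                  # toy sample representation
class_embeddings = torch.randn(5, 8)   # toy embeddings for 5 seen classes

# Boltzmann distribution over classes: p(y|x) = exp(-E(x, y)) / Z(x)
energies = energy(x, class_embeddings)         # shape: (5,)
probs = torch.softmax(-energies, dim=-1)

# Inference: pick the class whose (sample, class) pair has the least energy.
predicted_class = energies.argmin().item()
```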

EBMs for Continual Learning


Selection of Negative Samples

  • The paper considers several strategies for the selection of negative samples:

    • One negative class per sample, where the negative class is sampled from the current batch of data. This selection approach performs best.

    • All the negative classes in a batch are used for creating the negative samples.

    • All the classes seen so far in training are used as the negative samples. This approach works the worst in practice.

  • Given the flexibility of sampling the negative classes, EBMs can be used in boundary-agnostic setups (where the data distribution can change smoothly without an explicit task boundary).

Energy Network

  • EBMs take both the sample and the class as input. The class can be treated as an attention filter to select the most relevant information between the sample and the class.

  • In theory, EBMs can train for any number of classes without knowing the number of classes beforehand. This is an advantage over softmax-based approaches, where adding new classes requires changing the size of the softmax output layer.

Inference

  • During inference, all the classes seen so far are evaluated via the energy function. The class that corresponds to the least-energy sample-class pair is returned as the prediction.

Experiments


Datasets

  • Split MNIST

  • Permuted MNIST

  • CIFAR-10

  • CIFAR-100

Results in Boundary-Aware Setting

  • The proposed approach outperforms standard continual learning approaches that use neither a replay buffer nor a generative model.

  • Additionally, the paper shows that, for the same number of parameters, the effective capacity of EBM models is higher than the effective capacity of standard classification models.

  • The paper also shows that standard classification models tend to assign a high probability to new classes for both old and new data. EBMs assign the probability more uniformly (and correctly) across the classes.

  • In an ablation study, the paper shows that both label conditioning and the contrastive divergence loss help in improving the performance of EBMs.

Results in Boundary-Agnostic Setting

  • The performance gains in the boundary-agnostic setting are even more significant than the improvements in the boundary-aware setting.
diff --git a/_site/site/2021/01/25/HyperNetworks.html b/_site/site/2021/01/25/HyperNetworks.html new file mode 100644 index 00000000..e50b0f77 --- /dev/null +++ b/_site/site/2021/01/25/HyperNetworks.html @@ -0,0 +1,83 @@ +

Introduction

  • The paper explores HyperNetworks - networks that generate the weights for another (main) network.

  • Link to the paper

Approach


Static HyperNetworks - HyperNetworks for CNNs

  • Consider a $D$-layer CNN where the parameters for the $j^{th}$ layer are stored in a matrix $K^j$ of shape $N_{in}f_{size} \times N_{out}f_{size}$.

  • The HyperNetwork is implemented as a two-layer linear network where the input is a layer embedding $z^j$, and the output is $K^j$ (a minimal sketch follows this list).

  • The first layer (of the HyperNetwork) maps the input to $N_{in}$ different outputs using $N_{in}$ weight matrices.

  • The second layer maps the $N_{in}$ different inputs to matrices $K_{i}$ using a shared matrix. The resulting $N_{in}$ $K_{i}$ matrices are concatenated to obtain $K^j$.

  • As a side note, a HyperNetwork has many fewer parameters than the network for which it produces weights.

  • In the general case, the kernel dimensions (across layers) are not of the same size but are integer multiples of some basic sizes. In that case, the HyperNetwork can generate kernels of the basic size, which can be concatenated to form larger kernels. This requires additional input embeddings but no change in the architecture of the HyperNetwork.
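A minimal sketch of the two-layer static hypernetwork, with made-up sizes and a simplified reshape at the end (the exact kernel layout used in the paper differs):

```python
import torch
import torch.nn as nn

class StaticHyperNetwork(nn.Module):
    """Toy hypernetwork mapping a layer embedding z_j to a kernel matrix of
    shape (N_in * f_size, N_out * f_size)."""

    def __init__(self, z_dim=64, n_in=16, n_out=16, f_size=3):
        super().__init__()
        self.n_in, self.n_out, self.f_size = n_in, n_out, f_size
        # First layer: one projection matrix per input-channel group.
        self.w1 = nn.Parameter(torch.randn(n_in, z_dim, z_dim) * 0.01)
        self.b1 = nn.Parameter(torch.zeros(n_in, z_dim))
        # Second layer: a single matrix shared across the N_in groups.
        self.w2 = nn.Parameter(torch.randn(z_dim, n_out * f_size * f_size) * 0.01)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        a = torch.einsum("gij,j->gi", self.w1, z) + self.b1  # (n_in, z_dim)
        k = a @ self.w2                                      # (n_in, n_out*f*f)
        # Concatenate the per-group slices into the full kernel matrix.
        return k.reshape(self.n_in * self.f_size, self.n_out * self.f_size)

kernel = StaticHyperNetwork()(torch.randn(64))  # one embedding -> one layer's kernel
```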

Dynamic HyperNetworks - HyperNetworks for RNNs

  • HyperRNNs/HyperLSTMs denote HyperNetworks that generate weights for RNNs/LSTMs.

  • HyperRNNs implement a form of relaxed weight sharing - an alternative to the full weight sharing of traditional RNNs.

  • At any timestep $t$, the input to the HyperRNN is the concatenation of the vector $x_{t}$ (input to the main RNN at time $t$) and the hidden state $h_{t-1}$ of the main RNN. The output is the weights for the main RNN at timestep $t$.

  • In practice, a weight scaling vector $d$ is used to reduce the memory footprint, which would otherwise be $dim$ times the memory of a standard RNN, where $dim$ is the dimensionality of the embedding vector $z_j$.

Experiments

  • HyperNetworks are used to train standard CNNs for MNIST and ResNets for CIFAR-10. In these experiments, HyperNetworks slightly underperform the best-performing models but use many fewer parameters.

  • HyperLSTMs trained on the Penn Treebank dataset and the Hutter Prize Wikipedia dataset outperform stacked LSTMs and perform similarly to layer-norm LSTMs. Interestingly, combining HyperLSTMs with layer-norm improves performance further.

  • Given the similar performance of HyperLSTMs and layer-norm LSTMs, the paper conducted an ablation study to understand if HyperLSTMs learn a weight adjustment policy similar to the statistics-based approach used by layer-norm LSTMs.

    • However, the analysis of the histogram of the hidden states suggests that using layer-norm reduces the saturation effect, while in HyperLSTMs, the cell is saturated most of the time. This indicates that the two models learn different policies.

  • HyperLSTMs are also evaluated for handwriting sequence generation by training on the IAM online handwriting dataset.

    • While HyperLSTMs are quite effective on this task, combining them with layer-norm degrades the performance.

  • On the WMT’14 En-to-Fr machine translation task, HyperLSTMs outperform LSTM-based approaches.
diff --git a/_site/site/2021/02/01/Zero-shot-Learning-by-Generating-Task-specific-Adapters.html b/_site/site/2021/02/01/Zero-shot-Learning-by-Generating-Task-specific-Adapters.html new file mode 100644 index 00000000..59cbe826 --- /dev/null +++ b/_site/site/2021/02/01/Zero-shot-Learning-by-Generating-Task-specific-Adapters.html @@ -0,0 +1,117 @@ +

Introduction

  • The paper introduces HYPTER - a framework for zero-shot learning (ZSL) in text-to-text transformer models, which trains a HyperNetwork to generate task-specific adapters from task descriptions.

  • The focus is on in-task zero-shot learning (e.g., learning to predict an unseen class or relation) and not on cross-task learning (e.g., training on sentiment analysis and evaluating on a question-answering task).

  • Link to the paper

Terminology

  • Task - an NLP task, like classification or question answering.

  • Sub-task

    • A class/relation/question within a task.

    • Denoted by a tuple $(d, D)$ where $d$ is the language description and $D$ represents the sub-task’s dataset.

Setup

  • Develop a ZSL approach for transfer to new sub-tasks within a task, using the task description available for each sub-task.

Approach

  • HYPTER has two main parts:

    • Main network

      • A pretrained text-to-text network

      • Instantiated as a BART-Base/Large model

    • HyperNetwork

      • Generates the weights for adapter networks that will be plugged into the main network.

  • The HyperNetwork has two parts:

    • Encoder

      • Encodes the task description

      • Instantiated as a RoBERTa-Base model

    • Decoder

      • Decodes the encoding into weights for multiple adapters (in parallel)

      • Instantiated as a feedforward network

  • The model trains in two phases:

    • The main network is trained on all the data by concatenating the task description with the input.

    • The adapters are trained by sampling a task from the train set while keeping the main network frozen.

Experiments

diff --git a/_site/site/2021/02/08/Continual-learning-with-hypernetworks.html b/_site/site/2021/02/08/Continual-learning-with-hypernetworks.html new file mode 100644 index 00000000..e16efe1d --- /dev/null +++ b/_site/site/2021/02/08/Continual-learning-with-hypernetworks.html @@ -0,0 +1,120 @@ +

Introduction

  • The paper proposes the use of task-conditioned HyperNetworks for lifelong learning / continual learning setups.

  • The idea is that the HyperNetwork only needs to remember the task-conditioned weights and not the input-output mapping for all the data points.

  • Link to the paper

  • Author’s Implementation

Terminology

  • $f$ denotes the network for the given $t^{th}$ task.

  • $h$ denotes the HyperNetwork that generates the weights for $f$.

  • $\Theta_{h}$ denotes the parameters of $h$.

  • $e^{t}$ denotes the input task-embedding for the $t^{th}$ task.

Approach

  • When training on the $t^{th}$ task, the HyperNetwork generates the weights for the network $f$.

  • The current task loss is computed using the generated weights, and the candidate weight update ($\Delta \Theta_{h}$) is computed for $h$.

  • The actual parameter change is computed by minimizing the following total loss (a minimal code sketch of this regularized update follows this section):

$L_{total} = L_{task}(\Theta_{h}, e^{T}, X^{T}, Y^{T}) + \frac{\beta_{output}}{T-1} \sum_{t=1}^{T-1} \lVert f_{h}(e^{(t)}, \Theta_{h}^{*}) - f_{h}(e^{(t)}, \Theta_{h} + \Delta \Theta_{h}) \rVert^2$

  • $L_{task}$ is the loss for the current task.

  • $(X^{T}, Y^{T})$ denotes the training datapoints for the $T^{th}$ task.

  • $\beta_{output}$ is a hyperparameter that controls the regularizer’s strength.

  • $\Theta_{h}^*$ denotes the optimal parameters after training on the first $T-1$ tasks.

  • $\Theta_{h} + \Delta \Theta_{h}$ denotes the one-step update on the current $h$ model.

  • In practice, the task encoding $e^{t}$ is chunked into smaller vectors, and these vectors are fed as input to the HyperNetwork.

  • This enables the HyperNetwork to produce weights iteratively, instead of all at once, thus helping to scale to larger models.

  • The paper also considers the problem of inferring the task embedding from a given input pattern.

  • Specifically, the paper uses task-dependent uncertainty, where the task embedding with the least predictive uncertainty is chosen as the task embedding for the given unknown task. This approach is referred to as HNET+ENT.

  • The paper also considers using HyperNetworks to learn the weights for a task-specific generative model. This generative model is used to generate pseudo-samples for rehearsal-based approaches. The paper considers two cases:

    • HNET+R, where the replay model (i.e., the generative model) is parameterized using a HyperNetwork.

    • HNET+TIR, where an auxiliary task-inference classifier is used to predict the task identity.
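A minimal sketch of the output regularizer, under the simplifying assumption that the lookahead term $\Theta_{h} + \Delta \Theta_{h}$ is replaced by the current parameters; all names here are made up:

```python
import torch

def hnet_regularized_loss(hnet, task_loss, stored_embs, stored_outputs,
                          beta_output=0.01):
    """Total loss for task-conditioned hypernetwork training.
    `stored_outputs` holds hnet(e_t) snapshots taken right after finishing
    each earlier task (standing in for hnet evaluated at Theta_h^*)."""
    total = task_loss  # L_task on the current task, computed elsewhere
    if stored_embs:
        reg = sum(
            ((hnet(e) - out_star) ** 2).sum()  # keep old task weights fixed
            for e, out_star in zip(stored_embs, stored_outputs)
        )
        total = total + beta_output * reg / len(stored_embs)
    return total

hnet = torch.nn.Linear(4, 10)           # toy hypernetwork: embedding -> weights
e1, e2 = torch.randn(4), torch.randn(4)
snapshots = [hnet(e1).detach()]          # snapshot after finishing task 1
task_loss = hnet(e2).sum() ** 2          # stand-in for L_task on task 2
loss = hnet_regularized_loss(hnet, task_loss, [e1], snapshots)
loss.backward()
```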

Experiments

  • Three setups are considered:

    • CL1 - Task identity is given to the model.

    • CL2 - Task identity is not given, but task-specific heads are used.

    • CL3 - Task identity needs to be explicitly inferred.

  • On the permuted MNIST task, the proposed approach outperforms baselines like Synaptic Intelligence and Online EWC, and the performance gap grows for longer task sequences.

  • Forward knowledge transfer is observed with the CIFAR datasets.

  • One potential limitation (which is more a limitation of HyperNetworks in general) is that HyperNetworks may be harder to scale for larger models like ResNet50 or transformers, thus limiting their usefulness for lifelong learning use cases.
diff --git a/_site/site/2021/02/15/When-Do-Curricula-Work.html b/_site/site/2021/02/15/When-Do-Curricula-Work.html new file mode 100644 index 00000000..485b58c1 --- /dev/null +++ b/_site/site/2021/02/15/When-Do-Curricula-Work.html @@ -0,0 +1,114 @@ +

Introduction

  • The paper systematically investigates when curriculum learning helps.

  • Link to the paper

Implicit Curricula

  • Implicit curricula refers to the order in which a network learns data points when trained using stochastic gradient descent with iid sampling of data.

  • Suppose the model makes a correct prediction for a given datapoint in the $i^{th}$ epoch and in all subsequent epochs. The $i^{th}$ epoch is then referred to as the learned iteration of the datapoint (the iteration in which the datapoint was learned).

  • The paper studied multiple models (VGG, ResNet, WideResNet, DenseNet, and EfficientNet) with different optimizers (Adam and SGD with momentum).

  • The resulting implicit curricula are broadly consistent within model families, making the following discussion less dependent on the model architecture.

Explicit Curricula

  • When defining an explicit curriculum, three components stand out: the scoring function, the pacing function, and the order.

Scoring Function

  • Maps a data point to a numerical score of difficulty.

  • Choices:

    • Loss function for a model

    • Learned iteration

    • Estimated c-score - captures a given model’s consistency in correctly predicting a given datapoint’s label when trained on an iid dataset that does not contain the datapoint.

  • The three scoring functions are computed for two models on the CIFAR dataset.

  • The resulting six scores have a high Spearman rank correlation. Hence, for the rest of the discussion, only the c-score is used.

Pacing Function

  • The pacing function, denoted by $g(t)$, controls the size of the training dataset at step $t$.

  • At step $t$, the model is trained on the first $g(t)$ examples (as per the ordering), as illustrated below.

  • Choices: logarithmic, exponential, step, linear, quadratic, and root.
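A minimal sketch of some pacing-function shapes; the exact parameterizations used in the paper's sweep may differ:

```python
import math

def pacing(name: str, t: float, total_steps: float, dataset_size: int,
           start_fraction: float = 0.1) -> int:
    """g(t): how many (ordered) examples are available at step t.
    Each shape grows from `start_fraction` of the data to the full dataset."""
    progress = min(t / total_steps, 1.0)
    s = start_fraction
    shapes = {
        "linear": s + (1 - s) * progress,
        "quadratic": s + (1 - s) * progress ** 2,
        "root": s + (1 - s) * math.sqrt(progress),
        "exponential": s * (1 / s) ** progress,
        "step": s if progress < 0.5 else 1.0,
        "logarithmic": s + (1 - s) * math.log1p(9 * progress) / math.log(10),
    }
    return max(1, int(shapes[name] * dataset_size))

# Midway through training, a root pacing already reveals most of the data:
print(pacing("root", t=500, total_steps=1000, dataset_size=50_000))
```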

Order

  • Order in which the data points are picked:

    • Curriculum - ordering points from lowest score to highest and training on the easiest data points first.

    • Anti-curriculum - ordering points from highest score to lowest and training on the hardest data points first.

    • Random - randomly selecting the data points to train on.

Observations

  • The paper performed a hyperparameter sweep over 180 pacing functions and three orderings, for three random seeds, over the CIFAR10 and CIFAR100 datasets. For both datasets, the best performance is obtained with random ordering, indicating that curricula did not give any benefits.

  • However, curricula are useful when the number of training iterations is small.

  • Curricula also help with training on noisy data (simulated by randomly permuting the labels).

  • The observations for the smaller CIFAR10/100 datasets generalize to slightly larger datasets like FOOD101 and FOOD101N.
diff --git a/_site/site/2021/02/22/Anatomy-of-Catastrophic-Forgetting-Hidden-Representations-and-Task-Semantics.html b/_site/site/2021/02/22/Anatomy-of-Catastrophic-Forgetting-Hidden-Representations-and-Task-Semantics.html new file mode 100644 index 00000000..7beb2a80 --- /dev/null +++ b/_site/site/2021/02/22/Anatomy-of-Catastrophic-Forgetting-Hidden-Representations-and-Task-Semantics.html @@ -0,0 +1,182 @@ +

Introduction

  • The paper studies the effect of catastrophic forgetting on representations in neural networks.

  • Link to the paper

Setup

  • Techniques:

    • Representational Similarity Measures

    • Layer Freezing

    • Layer Reset

  • Datasets:

    • Split CIFAR-10

      • The CIFAR-10 dataset is split into m (=2) tasks, where each task is an n-way classification task.

      • The underlying network has a shared trunk with m heads, one head per task.

    • Split CIFAR-100 Distribution Shift

      • Each task requires distinguishing between n CIFAR-100 superclasses, with training/test data corresponding to a subset of the constituent classes.

  • Network Architectures

    • VGG, ResNet, and DenseNet

Questions

  • Are all representations (throughout the network) equally responsible for forgetting?

    • Higher layers (layers closer to the output) are the primary source of catastrophic forgetting.

    • The Centered Kernel Alignment (CKA) technique is used to compare the similarity between the layer representations before and after training on the second task (a minimal sketch of linear CKA follows this list).

    • Higher-layer representations change significantly when training over two tasks, while the lower-layer representations remain stable.

    • When finetuning on the second task, freezing the lower layers has only a minor effect on the accuracy of the second task.

    • In layer-reset experiments, after training on the second task, the weights of some of the layers are reset to their values after training on the first task.

      • Resetting the weights of higher layers leads to significant improvement in the performance on the first task.

  • Do common approaches for countering catastrophic forgetting work by stabilizing the higher layers?

    • Yes - both EWC and replay-based approaches counter catastrophic forgetting by stabilizing the higher layers.

    • This is demonstrated by showing that as the quadratic penalty for EWC (or the fraction of data from the replay buffer) increases (to reduce catastrophic forgetting), the representations of higher layers change less during the second task.

  • When training over a sequence of tasks, are similar tasks more likely to be forgotten than dissimilar tasks?

    • Setup I

      • Training over a sequence of two binary classification tasks.

      • Task 1: Two related classes (say ship and truck).

      • Task 2: Two related classes, which may or may not be related to the classes of Task 1. For example, the classes could be:

        • cat and horse (not related to the classes of the first task)

        • plane and car (related to the classes of the first task)

      • Training over semantically similar tasks (here, plane and car) leads to less forgetting.

    • Setup II

      • Training over a sequence of two classification tasks.

      • Task 1: Four classes that can be grouped into two groups (say deer, dog, ship, and truck).

      • Task 2: Two related classes, which may be related to group 1 or group 2. For example, the classes could be two animals or two objects.

      • After training on the second task, classes (from Task 1) that are in a different group than the classes from Task 2 are forgotten less.

    • Conclusion

      • Task representational similarity is a function of both the underlying data and the optimization procedure.

      • Forgetting is most severe for task representations of intermediate similarity.

      • Representational similarity is a necessary but not sufficient condition for forgetting.

  • How does catastrophic forgetting change as the task similarity changes?

    • If the model learns different representations for dissimilar tasks, increasing dissimilarity can help to avoid forgetting.

    • When training the two-task, two-class (per task) CIFAR-10 setup with an “others” class (made up of classes not already used in the setup), forgetting is reduced.
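A minimal sketch of one common (linear) formulation of CKA, applied to a layer's activations on the same probe inputs before and after training on the second task; the toy matrices are made up.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape
    (num_examples, num_features); 1.0 means identical up to rotation/scale."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

before = np.random.randn(256, 128)             # toy activations after task 1
after = before + 0.1 * np.random.randn(256, 128)  # same layer after task 2
print(linear_cka(before, after))  # low values in higher layers signal forgetting
```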
diff --git a/_site/site/2021/03/01/Ad-Click-Prediction-a-View-from-the-Trenches.html b/_site/site/2021/03/01/Ad-Click-Prediction-a-View-from-the-Trenches.html new file mode 100644 index 00000000..612ae485 --- /dev/null +++ b/_site/site/2021/03/01/Ad-Click-Prediction-a-View-from-the-Trenches.html @@ -0,0 +1,144 @@ +

Introduction

  • The paper presents case studies from the experience of deploying an ad click-through rate (CTR) prediction model at Google.

  • The paper focuses on themes related to memory footprint, performance analysis, calibration, confidence in the predictions, and feature engineering.

  • Link to the paper

System Overview

  • Features (corresponding to a given ad) include the search query and the metadata in the ad. The features are very sparse.

  • A single-layer, regularized Logistic Regression model is trained with Online Gradient Descent (same as Stochastic Gradient Descent, but in the online setting).

  • From a memory perspective, it is important to minimize the size of the final model.

  • Adding just the L1 penalty is not sufficient to produce weights that are precisely equal to 0.

  • The “Follow The (Proximally) Regularized Leader” (FTRL-Proximal) algorithm is used to learn sparse models without losing accuracy.

  • Using per-coordinate learning rates improves performance at the cost of memory, as both the sum of gradients and the sum of squares of gradients are tracked for each feature.

    • In practice, some of the cost can be alleviated by approximating that all the events containing a given feature have the same probability.

    • In such a case, the sum of squares of gradients can be approximated using the counts of positive and negative events alone.

  • Some memory overhead can be reduced based on the following observation: the vast majority of features are extremely rare, so it is not necessary to track the statistics for such rare features.

    • However, in an online setting, it is not known upfront which features will be rare.

    • The paper proposes probabilistic feature inclusion - a feature is added to the model with probability $p$ each time it occurs (until it is included). Once added, the feature is not removed. (A sketch follows at the end of this section.)

    • An alternative approach is to use a rolling set of counting Bloom filters to check if a feature has appeared at least $n$ times in training. Bloom filters are probabilistic data structures and can return false positives.

  • Memory can also be saved by using fewer bits for encoding weights.

    • Most of the weight coefficients lie in the range $(-2, 2)$, so a 16-bit encoding is used in place of a 32- or 64-bit encoding.

    • This quantization approach needs to account for roundoff problems, but the fix is easy to implement.

  • When training many models with similar hyperparameters, per-model learning rate counters can be replaced by statistics shared by all the models, thus reducing the memory footprint.

  • A Single Value Structure is used to reduce the memory footprint when evaluating a very large set of model variants that differ only in the addition/removal of a small subset of features.

    • All the models that use a feature share a single value structure corresponding to the feature. This reduces the memory overhead by an order of magnitude.

    • During the update, each model computes the weight update corresponding to all the features that it is using. The updated weight is averaged across all the models and used to update the single value structure.

  • Since CTR datasets are generally highly imbalanced, the training data (for the negative class) can be subsampled to reduce the amount of data to train over. The loss component (corresponding to the negative class) is then appropriately scaled up.

  • Metrics

    • Offline metrics like AucLoss (1 - AUC), Log Loss, and Squared Error.

    • Online loss is computed on the new training data (new incoming traffic) before training on it.

  • The confidence in the model’s prediction is estimated using a heuristic called the uncertainty score. It can be measured using the dot product of the feature vector and the vector of learning rates.

    • The idea is that the learning rates already maintain a notion of uncertainty.

    • Features for which the learning rate is high are the features for which uncertainty is also high.

  • Calibrating Predictions

    • The calibration can be improved by applying correction functions $\tau_d(p)$ where $p$ is the predicted CTR, and $d$ is an element of a partition of the training data.

    • $\tau$ can be modeled as $\tau(p) = \gamma p^{\kappa}$, where $\gamma$ and $\kappa$ are learned using Poisson regression.

  • Unsuccessful Experiments

    • Aggressive feature hashing was tried to reduce the memory overhead. However, it led to a significant loss in performance.

    • Using dropout did not help, probably because the features are sparse.

    • Using feature bagging hurt AucLoss.

    • Feature vector normalization did not improve performance, probably because of the per-coordinate learning rates and regularization.
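A minimal sketch of probabilistic feature inclusion (all names made up); in expectation, a feature needs about $1/p$ occurrences before its statistics start being tracked, so rare features are unlikely to ever enter the model.

```python
import random

class ProbabilisticFeatureSet:
    """An unseen feature is admitted with probability p at each occurrence;
    once admitted, it stays and its statistics are tracked."""

    def __init__(self, p: float = 0.1, seed: int = 0):
        self.p = p
        self.included = set()
        self.rng = random.Random(seed)

    def observe(self, feature: str) -> bool:
        """Returns True if statistics should be tracked for this feature."""
        if feature in self.included:
            return True
        if self.rng.random() < self.p:
            self.included.add(feature)
            return True
        return False

features = ProbabilisticFeatureSet(p=0.05)
for token in ["the", "the", "zyzzyva"]:
    features.observe(token)  # frequent features are admitted quickly in expectation
```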
diff --git a/_site/site/2021/03/08/Practical-Lessons-from-Predicting-Clicks-on-Ads-at-Facebook.html b/_site/site/2021/03/08/Practical-Lessons-from-Predicting-Clicks-on-Ads-at-Facebook.html new file mode 100644 index 00000000..0b8c6193 --- /dev/null +++ b/_site/site/2021/03/08/Practical-Lessons-from-Predicting-Clicks-on-Ads-at-Facebook.html @@ -0,0 +1,180 @@ +

Introduction

  • The paper describes several design choices for developing a model for predicting user response (clicks) on ads.

  • Link to the paper

Experimental Setup

  • The model is trained/evaluated on offline data.

  • Evaluation metrics:

    • Normalized Cross-Entropy (or Normalized Entropy, NE)

      • Defined as the predictive log-loss per impression, divided by the entropy of the background CTR (click-through rate). A worked computation is sketched below.

      • The background CTR is the average empirical CTR of the training data.

      • Lower normalized cross-entropy is better.

      • The normalization term is important to make the metric insensitive to the background CTR. Otherwise, the log loss can easily be made low when the background CTR is close to 0 or 1.

      • NE is also related to the Relative Information Gain: $RIG = 1 - NE$.

    • Calibration

      • The ratio of the average estimated CTR to the empirical CTR.

    • Area-Under-ROC (AUC) is a good metric for measuring ranking quality (among ads). However, it is not used as a metric so as to avoid over-delivery or under-delivery of ads.
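A small worked sketch of the NE computation on toy data:

```python
import numpy as np

def normalized_entropy(labels: np.ndarray, predictions: np.ndarray) -> float:
    """Normalized cross-entropy: average log loss divided by the entropy of
    the background CTR (the log loss of always predicting the average CTR)."""
    eps = 1e-12
    p = np.clip(predictions, eps, 1 - eps)
    log_loss = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    ctr = labels.mean()  # background CTR
    background_entropy = -(ctr * np.log(ctr) + (1 - ctr) * np.log(1 - ctr))
    return log_loss / background_entropy

labels = np.array([0, 0, 1, 0])               # clicks
predictions = np.array([0.1, 0.2, 0.6, 0.3])  # predicted CTRs
print(normalized_entropy(labels, predictions))  # < 1 beats the constant-CTR baseline
```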

Implementation Details

  • Feature Transformation

    • A given ad impression, $e$, is transformed into an $n$-dimensional vector, $x$, where the $i^{th}$ index denotes the value of the $i^{th}$ categorical feature.

    • Continuous features are binned, and the bin index is used as a categorical feature, thus applying a non-linear transformation to the features.

    • Categorical features that are tuple-like (i.e., have a tuple of values) can be converted into new categorical features by taking a cartesian product.

    • Boosted decision trees can be used to implement the previous two transformations in one go (see the sketch below).

      • Each tree is used as a categorical feature that takes the value of the index of the leaf node that an ad maps to.

      • The paper used the Gradient Boosting Machine with the $L_2$-TreeBoost algorithm.

      • Using the tree feature transformation improves the Normalized Cross-Entropy by $3.4\%$.

  • Model

    • Logistic Regression (LR) or the Bayesian online learning scheme for probit regression (BOPR) algorithm is used for training a linear classifier model.

    • While both LR and BOPR models provide similar performance, the LR model is half the BOPR model’s size and faster for training/inference.
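A minimal sketch of the tree-based feature transformation, using scikit-learn stand-ins for the paper's boosted trees and online linear model (the toy data and all parameters are made up):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Toy data standing in for ad impressions (features) and clicks (labels).
X = np.random.randn(1000, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# 1. Fit boosted trees; each tree's leaf index becomes a categorical feature.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3)
gbdt.fit(X, y)
leaf_indices = gbdt.apply(X)[:, :, 0]  # shape: (n_samples, n_trees)

# 2. One-hot encode the leaf indices and train the linear classifier on them.
encoder = OneHotEncoder()
leaf_features = encoder.fit_transform(leaf_indices)
lr = LogisticRegression(max_iter=1000)
lr.fit(leaf_features, y)
```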

Role of Data Freshness

  • When a model is trained on the data from a particular day and evaluated on data from subsequent days, the model’s performance degrades as the delay between the training and test sets increases.

  • This highlights the importance of the freshness of the training data.

  • One straightforward approach is to train the model every day.

  • Alternatively, the linear classifier can be trained using online learning, while the boosted decision trees are still trained daily.

  • Different choices for setting the learning rate (for online training of the linear classifier) are compared, and the per-coordinate learning rate is found to perform best in practice.

Generating Real-Time Training Data

  • An “online joiner” system is used to generate real-time training data for the linear classifier.

  • The challenging part is that while there are data points with a “positive” label (i.e., the user clicked on the ad), there are no data points with a “negative” label (since there is no “no-click” button the user can click).

  • An impression is considered to have the “no-click” label if the user does not click on the ad within a (long) time window of seeing the ad.

  • Too short a time window could mislabel some impressions, while too long a time window delays the real-time training data.

  • The online joiner performs a distributed stream-to-stream join on the stream of ad impressions and the stream of ad clicks using a HashQueue (sketched below).

  • A HashQueue:

    • comprises a First-In-First-Out queue as a buffer window and a hash map for fast random access to label impressions.

    • supports three operations on key-value pairs: enqueue, dequeue, and lookup.
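A minimal, illustrative HashQueue (all names made up): impressions wait in a FIFO window; a click within the window joins into a positive example, and aging out of the window emits a negative example.

```python
from collections import OrderedDict

class HashQueue:
    """FIFO buffer of pending impressions with O(1) lookup, for joining a
    click stream against an impression stream."""

    def __init__(self, window: int):
        self.window = window        # max pending impressions in the buffer
        self.items = OrderedDict()  # impression id -> impression payload

    def enqueue(self, key, value):
        self.items[key] = value
        if len(self.items) > self.window:
            # Oldest impression falls out of the window: emit a negative label.
            _, old_value = self.items.popitem(last=False)
            emit_training_example(old_value, label=0)

    def lookup(self, key):
        return self.items.get(key)

    def dequeue(self, key):
        # A click arrived within the window: emit a positive label.
        value = self.items.pop(key, None)
        if value is not None:
            emit_training_example(value, label=1)

def emit_training_example(impression, label):
    print(impression, label)  # stand-in for writing to the training stream

q = HashQueue(window=2)
for i in (1, 2, 3):
    q.enqueue(f"imp-{i}", {"ad": f"a{i}"})  # imp-1 ages out -> negative example
q.dequeue("imp-3")                          # click joined -> positive example
```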

Memory and Latency

  • Increasing the number of boosting trees shows diminishing returns, and most of the improvement comes from the first 500 trees.

  • The top 10 features account for half of the total feature importance, while the last 300 features add less than 1% of the feature importance.

  • Features in the boosting model can be broadly classified as contextual or historical.

  • Historical features provide much more explanatory power than contextual features, though contextual features are helpful for handling the cold-start problem.

  • Models trained with just the contextual features rely more heavily on data freshness than models trained with just the historical features.

  • Uniform subsampling and negative downsampling techniques are used to limit the amount of training data.

  • In the case of negative downsampling, the model needs to be re-calibrated as well.
diff --git a/_site/site/2021/03/15/The-Tail-at-Scale.html b/_site/site/2021/03/15/The-Tail-at-Scale.html new file mode 100644 index 00000000..b62dd5f1 --- /dev/null +++ b/_site/site/2021/03/15/The-Tail-at-Scale.html @@ -0,0 +1,184 @@ +

Introduction

  • The paper presents some causes of (temporary) high-latency episodes in large-scale online systems, along with techniques to mitigate their impact so that the tail of the latency distribution remains short.

  • Link to the paper

Why does variability in response time exist

  • Shared resources between processes on the same node.

  • Background processes (daemons) can cause momentary spikes in resource usage.

  • Processes running on different nodes may contend for global resources like shared file systems.

  • Maintenance activities like disk compaction or garbage collection.

  • Other factors like queueing, power limits, or energy management.

  • In large-scale systems, this component-level variability is further amplified.

Reducing Component Variability

  • Use differentiated service classes to prioritize user requests over non-interactive requests.

  • Reduce head-of-line blocking by breaking long-running requests into smaller requests.

  • Synchronize maintenance jobs across nodes to minimize the window for high latency.

  • Caching generally does not help to address tail latency.

Adapting to Latency Variability

  • Two categories of adaptation approaches:

    • Within-Request Short-Term Adaptations

      • These approaches are more relevant for services that perform many read queries on loosely consistent datasets.

      • Hedged Request (a sketch follows this list)

        • Send the request to multiple replicas, and once one of the replicas returns the result, cancel the other requests.

        • In practice, start by sending the request to only one replica. Send the secondary requests only if the first request has been outstanding for longer than the $95^{th}$-percentile expected latency.

        • This introduces an additional $5\%$ load while substantially shortening the latency tail.

        • This approach works because, often, the cause of latency is not the query itself but other factors like overloaded nodes.

      • Tied Request

        • The hedged-request approach trades off how long to wait before initiating requests to other replicas: the sooner the secondary requests are sent, the lower the latency of serving the request, but the higher the overall load on the system.

        • The load on the system can be reduced by “tying” together requests (sent to different replicas) so that as soon as one replica starts processing a request, it notifies the other replicas, which can then drop the request or deprioritize it.

        • In practice, “tying” requests means that each replica knows the identity of the other replicas that may execute the request.

        • Note that there is a short window (of the average network message delay) during which multiple replicas could start executing the request. This can be mitigated if the client (issuing the requests) introduces a delay of twice the average network message delay.

      • Submit the request to the least-loaded replica

        • This is less effective because, for example, the load on a replica can change after the request is made but before it is executed.

    • Cross-Request Long-Term Adaptations

      • These approaches are more relevant for longer-term phenomena, e.g., situations where different services have different throughputs.

      • Micro-partitions

        • Generate more partitions than the number of nodes.

        • The partitions can be dynamically assigned to machines to ensure proper load balancing.

        • In case of a machine failure, many nodes can be used to quickly re-create the micro-partitions instead of waiting on one machine to read back a single large partition.

      • Selective Replication

        • With micro-partitioning, additional replicas of micro-partitions can be created ahead of time to achieve good load balancing.

      • Latency-Induced Probation

        • In some cases, removing a slow node can improve the overall latency of the system. The probated node can be re-incorporated when its latency improves.

  • Large Information Retrieval Systems

    • In such systems, speed can be more critical than the quality of the result.

    • The system should return a “good enough” result that is available with low latency instead of waiting for the “best” result that arrives with high latency.

    • In some cases, a request could trigger an unexpected code path or cause some other exception that slows down the entire system.

    • In such cases, the canary-request technique can be used: the system first sends the request to only 1 or 2 nodes, and forwards it to the remaining nodes only after receiving a successful response from the initial nodes.

  • Requests that update state are easier to handle for several reasons:

    • The scale of latency-critical modifications is generally small.

    • The update can be performed asynchronously after responding to the user.

    • Quorum-based approaches (often used for ensuring consistent updates) are inherently tail-tolerant.
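
As a concrete illustration of the hedged-request idea above, here is a small asyncio sketch; the replica tuples, simulated latencies, and the $95^{th}$-percentile constant are illustrative stand-ins for real RPCs and measured latencies:

```python
import asyncio

P95_LATENCY = 0.05  # seconds; stand-in for the measured 95th percentile

async def query_replica(replica, request):
    # Stand-in for a real RPC; `replica` is a (name, simulated_latency) pair.
    name, latency = replica
    await asyncio.sleep(latency)
    return f"{name} answered {request!r}"

async def hedged_request(replicas, request):
    primary = asyncio.ensure_future(query_replica(replicas[0], request))
    done, _ = await asyncio.wait({primary}, timeout=P95_LATENCY)
    if done:
        return primary.result()  # fast path: primary answered in time
    # Hedge: fan out to the remaining replicas; the first response wins.
    tasks = [primary] + [asyncio.ensure_future(query_replica(r, request))
                         for r in replicas[1:]]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()            # cancel the slower outstanding requests
    return done.pop().result()

# The primary is slow here, so hedging to a fast replica shortens the tail.
print(asyncio.run(hedged_request([("r1", 0.5), ("r2", 0.01)], "get /key")))
```
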
diff --git a/_site/site/2021/03/22/Deep-Neural-Networks-for-YouTube-Recommendations.html b/_site/site/2021/03/22/Deep-Neural-Networks-for-YouTube-Recommendations.html new file mode 100644 index 00000000..dfc9122d --- /dev/null +++ b/_site/site/2021/03/22/Deep-Neural-Networks-for-YouTube-Recommendations.html @@ -0,0 +1,151 @@ +

Introduction

  • The paper describes YouTube’s deep learning-based recommendation system.

  • Link to the paper

Challenges

  • Scale - A very large number of users and videos.

  • Freshness - A very large number of videos are uploaded every hour. The recommendation system should take these new videos into account as well.

  • Noise - User satisfaction has to be modeled from a noisy implicit feedback signal, as the explicit signal is very sparse.

System Overview

  • Two neural networks: one for candidate generation and another one for ranking.

  • Metrics

    • Offline metrics like precision, recall, and ranking loss.

    • A/B testing via live experiments.

Candidate Generation

  • Input: events from a user’s YouTube activity history.

  • Output: a small subset (hundreds) of videos.

  • Approach:

    • Recommendation is modeled as extreme multiclass classification.

    • Predict the video (from a corpus) that a user will watch at a given time.

    • The neural network’s task is to learn useful user embeddings, given the user’s context and history.

    • For each positive class (relevant video), negative classes (non-relevant videos) are sampled from the video corpus.

  • Model Architecture

    • A feedforward network whose input is user embeddings and context embeddings (watch history).

    • Watch history is a variable-length sequence of video ids, where each video id is mapped to an embedding.

    • The sequence of video ids is mapped to a sequence of embeddings, and this sequence is averaged to obtain a fixed-size embedding (see the sketch after this list).

    • Additional signals like demographic features and search-query embeddings can be added alongside the context embeddings.

    • The age of a video is also used as a feature during training to account for the freshness of the content. This feature is set to zero (or slightly negative) during inference.

  • Other Insights

    • Training examples are generated from all YouTube watches, including watches from videos embedded on other sites, to surface new content.

    • Generating the same number of training examples per user is important to prevent a small set of highly active users from dominating the model training.

    • Predicting a user’s next watch leads to better results than predicting a randomly held-out watch. This can be attributed to the general consumption pattern of videos (e.g., episodes are usually watched in order).
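
A rough sketch of the history-averaging step described above; the vocabulary size, embedding dimension, and function name are illustrative assumptions, not the production values:

```python
import torch
import torch.nn as nn

NUM_VIDEOS, EMBED_DIM = 1_000_000, 256  # illustrative sizes
video_embeddings = nn.Embedding(NUM_VIDEOS, EMBED_DIM)

def watch_history_vector(video_ids: torch.Tensor) -> torch.Tensor:
    # video_ids: 1-D tensor of watched video ids (variable length).
    embedded = video_embeddings(video_ids)  # (seq_len, EMBED_DIM)
    # Demographic features, search-query embeddings, and the video-age
    # feature would be concatenated to this average before the MLP tower.
    return embedded.mean(dim=0)             # fixed-size embedding

example = watch_history_vector(torch.tensor([12, 7, 42, 999]))
print(example.shape)  # torch.Size([256])
```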

Ranking

  • Input: a list of candidate videos to rank.

  • Output: a score for each video.

  • Approach

    • A feedforward network (similar to the candidate-generation model) trained using a logistic regression loss.

  • Feature representation

    • Different types of features: categorical vs. continuous, univalent vs. multivalent, describing the video vs. describing the user or context.

    • Important signals include the user’s interactions with the video (or similar videos) and which source/channel added the video to the candidate set.

    • Embeddings are shared across features. For example, the representation of a video id remains the same irrespective of whether it is used to represent the “video to recommend” or the “last seen video.”

    • Feature normalization, as well as transformations like exponents (square or square root) of continuous variables, improves performance.

  • To model the expected watch time, the logistic regression loss is weighted by the observed watch time: if a video was watched, its weight is the observed watch time; if it was not watched, its weight is set to 1.

  • In practice, this means that the logistic regression model learns odds that approximate the expected watch time of the video (a quick justification follows below).
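
As a quick justification of that last claim (a back-of-the-envelope argument, not a quote): with $N$ training impressions, $k$ of them clicked, and watch times $T_i$ for the clicked ones, the odds learned by the weighted model are roughly $\frac{\sum_i T_i}{N - k} = \frac{E[T]}{1 - P} \approx E[T](1 + P) \approx E[T]$, where $P = k/N$ is the click probability and $E[T]$ is the expected watch time per impression; since $P$ is small, the learned odds track $E[T]$ closely.
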
diff --git a/_site/site/2021/03/29/Synthesized-Policies-for-Transfer-and-Adaptation-across-Tasks-and-Environments.html b/_site/site/2021/03/29/Synthesized-Policies-for-Transfer-and-Adaptation-across-Tasks-and-Environments.html new file mode 100644 index 00000000..1289dc54 --- /dev/null +++ b/_site/site/2021/03/29/Synthesized-Policies-for-Transfer-and-Adaptation-across-Tasks-and-Environments.html @@ -0,0 +1,192 @@ +

Introduction

  • The paper studies transfer learning in RL, focusing on simultaneous transfer across both tasks and environments.

  • The key idea is to learn task and environment embeddings and compose them using a meta-rule; the proposed approach is called SYNPO (Synthesized Policies).

  • Link to the paper

Setup

  • Three settings are considered:

    • S1: Transfer to a new (environment, task) pair when the agent has been trained on the environment and the task before (but not simultaneously).

    • S2: Transfer to a new (environment, task) pair where either the environment or the task has not been seen previously.

    • S3: Transfer to a new (environment, task) pair where neither the environment nor the task has been seen previously.

  • In the second and third settings, the agent is allowed to collect some data in the new environment or task.

  • The (environment, task) combinations that the agent has seen during training are referred to as seen combinations, while the remaining combinations are referred to as unseen combinations.

  • The key idea is to:

    • learn embeddings of environments and tasks, and

    • use these embeddings to compose a policy (parameterized as a linear combination of a policy basis).

  • A disentanglement objective is used to decouple the task and environment embeddings.

Policy Composition

  • Given an (environment, task) pair $z = (\epsilon, \tau)$, the policy is given by $\pi_z(a|s) \propto \exp(\psi_s^T U(e_{\epsilon}, e_{\tau})\phi_{a} + b_{\pi})$.

  • Here, $b_{\pi}$ is a scalar bias, $\psi_{s}$ and $\phi_{a}$ are the state and action representations, and $U$ is parameterized as a linear combination of $K$ basis matrices $\Theta_k$:

  • $U(e_{\epsilon}, e_{\tau}) = \sum_{k=1}^{K}\alpha_k(e_{\epsilon}, e_{\tau})\Theta_k$.

  • The basis matrices (denoted by $\Theta_k$) are shared across tasks, while the coefficients ($\alpha_k$) are specific to the (environment, task) pair.

  • During training, the agent also predicts rewards using the same set of basis matrices but different coefficients (see the sketch below).
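
A small numpy sketch of this composition; the dimensions and random inputs are illustrative assumptions (in the paper, $\psi_s$, $\phi_a$, the embeddings, and the coefficients $\alpha_k$ are all learned):

```python
import numpy as np

K, D_S, D_A = 4, 8, 5                 # basis size, state dim, action dim
Theta = np.random.randn(K, D_S, D_A)  # shared basis matrices Theta_k

def compose_U(alpha):
    # U(e_env, e_task) = sum_k alpha_k(e_env, e_task) * Theta_k
    return np.einsum("k,kij->ij", alpha, Theta)

def policy(psi_s, alpha, Phi, b_pi=0.0):
    # pi_z(a|s) proportional to exp(psi_s^T U phi_a + b_pi)
    logits = psi_s @ compose_U(alpha) @ Phi.T + b_pi
    exp_logits = np.exp(logits - logits.max())  # stable softmax
    return exp_logits / exp_logits.sum()

alpha = np.random.rand(K)      # coefficients from the (env, task) embeddings
psi_s = np.random.randn(D_S)   # state representation
Phi = np.random.randn(3, D_A)  # action representations for 3 actions
print(policy(psi_s, alpha, Phi))  # a distribution over the 3 actions
```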

Disentangling environment and task embeddings

  • Given an (environment, task) pair, the agent is trained to decode the environment (and the task) from the agent’s trajectory.

  • The sequence of state-action pairs (in the trajectory) is mapped to a sequence of state-action representations, given by $\psi_s^T\Theta_k\phi_{a}$.

Experiment Setup

  • The agent is trained (and evaluated) in imitation learning (mostly) and reinforcement learning setups.

Environments

  • GRIDWORLD

    • Twenty $16 \times 16$ grid-aligned mazes that are similar in appearance but differ in topology.

    • The task is to collect colored blocks in a given order. In each task, the starting position of the agent and the positions of the blocks are randomized.

    • Each environment has 20 tasks, leading to a total of 400 (environment, task) combinations.

  • THOR

    • A 3D simulator where the agent is placed in indoor photo-realistic scenes.

    • The task is to search for objects and perform actions like “put cabbage on the fridge.”

    • The setup uses 19 scenes (environments), with each environment comprising 21 tasks.

Baselines

  • MLPs that concatenate the state, environment embedding, and task embedding.

  • Successor feature model

  • Module Network

  • Multi-task learning, where the distinction between environments is ignored.

Results

  • GRIDWORLD

    • In the first setting (S1):

      • SYNPO outperforms all the baselines.

      • As the agent is trained on more (environment, task) combinations, its performance on the unseen combinations improves. This trend saturates when the seen/total ratio reaches about 0.4 (i.e., training on 40% of all the combinations).

      • Task disentanglement is more important than environment disentanglement.

    • In the second and third settings (S2 and S3):

      • The agent uses one demonstration from each test pair to finetune the embeddings.

      • S2 is an easier setting than S3.

      • Transfer learning across tasks is easier than transfer learning across environments.

  • THOR

    • SYNPO outperforms all the baselines on both seen and unseen combinations.
diff --git a/_site/site/2023/02/10/Toolformer-Language-Models-Can-Teach-Themselves-to-Use-Tools.html b/_site/site/2023/02/10/Toolformer-Language-Models-Can-Teach-Themselves-to-Use-Tools.html new file mode 100644 index 00000000..ca331af6 --- /dev/null +++ b/_site/site/2023/02/10/Toolformer-Language-Models-Can-Teach-Themselves-to-Use-Tools.html @@ -0,0 +1,220 @@ +

Introduction

  • The paper presents Toolformer, a language model that uses simple APIs to invoke external tools (a calculator, a QA system, a search engine, a translation system, and a calendar).

  • Link to the paper

Approach

  • Starting with a language model M, the goal is to enable the language model to use tools by invoking API calls.

  • An API call is denoted by the tuple $c = $ (api_name, api_input). It can be linearized as $e(c) = $ “[api_name(api_input)]” or, including the result $r$ of the API call, as $e(c, r) = $ “[api_name(api_input) -> r]”. For example, a call to a QA tool could be linearized as “[QA(Who wrote Hamlet?) -> Shakespeare]”.

  • The given dataset of plain text, $C$, is converted into a dataset $C*$ augmented with API calls using a three-step process.

  • In the first step, a position ($i$) and API-call candidates (for the position $i$) are sampled.

    • Positions are sampled by (i) computing the probability that M assigns to starting an API call at each position and (ii) retaining the top-$k$ positions whose probability exceeds a threshold value.

    • For each of the sampled positions (say $i$), API calls are sampled by concatenating a prompt with the tokens up to index $i$ and sampling from the model M. Samples that do not generate the “end of API” token (i.e., “]”) are discarded.

  • In the second step, the API calls are executed to obtain the response $r$ (a text sequence).

    • API calls are filtered using the following criterion: if providing M with both the input and the output of the API makes it easier for M to predict the future tokens, compared to not using the API call at all or using only the input to the API, then the API call is helpful for M, and the example is retained (see the sketch after this list).

  • In the last step, the remaining API calls are merged to obtain the augmented dataset $C*$ that is used for finetuning M.

  • Note that $C*$ contains $C$, so M is finetuned on the original dataset as well as on examples where a tool is helpful.

  • During inference, the model decodes in the usual way. Decoding pauses when the model produces the “->” token, at which point the corresponding API is called to generate the response. Decoding (using the model) then resumes with the API output appended to the decoded text.
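
A minimal sketch of this filtering criterion, assuming a hypothetical helper `weighted_loss(prefix, continuation)` that returns M's (weighted) cross-entropy on the continuation given the prefix, and a threshold `tau`; all names here are illustrative:

```python
def keep_api_call(weighted_loss, prefix, continuation,
                  api_call, api_result, tau):
    # Loss when M sees the API call together with its result.
    loss_with_result = weighted_loss(
        f"{prefix}[{api_call} -> {api_result}] ", continuation)
    # Best loss M achieves without the result: either no call at all,
    # or the call without its output.
    loss_without_result = min(
        weighted_loss(prefix, continuation),
        weighted_loss(f"{prefix}[{api_call}] ", continuation),
    )
    # Keep the call only if input + output reduce the loss by at least tau.
    return loss_without_result - loss_with_result >= tau
```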

Tools

  • There are two constraints on the tools: (i) their inputs and outputs should be expressible as text, and (ii) a few demonstrations of their use can be obtained. The second constraint means that the tool should be usable and accessible.

  • The paper considers the following tools: a question-answering system, a Wikipedia search engine, a calculator, a calendar, and a machine translation system. Of these, only the calculator and the calendar are non-neural-network tools.

Experiments

  • A subset of CCNet is used as the language modeling dataset.

  • GPT-J is used as the language model.

  • For finetuning, the batch size is 128, the learning rate is 1e-5, and a linear warmup over the first 10% of training is used.

  • The following models are compared:

    • GPT-J: the regular GPT-J model without any finetuning.

    • GPT-J + CC: GPT-J finetuned on $C$ without any API calls.

    • Toolformer, i.e., GPT-J finetuned on $C*$.

    • Toolformer with API calls disabled during decoding.

    • OPT (66B)

    • GPT-3

  • The models are evaluated in a prompted zero-shot setup, where models are instructed to solve a task without any in-context examples.

  • One difference from standard greedy decoding is that an API call is made whenever it is among the top-10 most likely next tokens. This is done to increase the use of API calls.

  • Evaluation Tasks

    • SQuAD, GoogleRE, and T-REx subsets of the LAMA benchmark, where the model has to complete a short statement with a missing fact.

      • Since LAMA questions are based on Wikipedia, Toolformer is not allowed to use the Wikipedia search tool.

      • The evaluation criterion is whether the correct word is among the first five words predicted by the model.

      • Toolformer uses the question-answering tool for most cases, outperforming all the baselines.

    • Math Datasets

      • ASDiv, SVAMP, and MAWPS benchmarks.

      • The first number predicted by the model is considered to be the output.

      • Toolformer uses the calculator tool for most cases, thereby outperforming all the baselines.

    • Question Answering

      • WebQuestions, Natural Questions, and TriviaQA datasets.

      • The evaluation criterion is whether the correct word is among the first 20 words predicted by the model.

      • The question-answering tool is disabled for this setup.

      • Toolformer uses the Wikipedia search tool for most cases, thereby outperforming all the baselines other than the much larger GPT-3 model.

    • Multilingual Question Answering

      • MLQA benchmark.

      • The evaluation criterion is whether the correct word is among the first ten words predicted by the model.

      • Toolformer uses the translation tool for most of the questions, with questions in Hindi being an exception.

      • However, Toolformer does not consistently outperform the GPT-J baseline, likely because, for some languages, finetuning on CCNet hurts performance.

    • Temporal Datasets

      • TEMPLAMA (cloze-style queries where the answer changes with time) and DATESET (a dataset generated through a series of templates populated with random dates/durations).

      • While Toolformer outperforms the baselines on both datasets, it relies on the Wikipedia search and question-answering tools (rather than the calendar tool) for TEMPLAMA. On DATESET, it uses the calendar tool for the majority of queries.

    • Language Modeling

      • WikiText and a subset of 10,000 randomly selected documents from CCNet (not used during the training of M).

      • Training on $C*$ does not increase perplexity compared to training on $C$. In this experiment, the API calls are disabled during inference.

  • Varying the size of the underlying model shows that the ability to use tools emerges only at around 775M parameters.

Future Work

  • Extending Toolformer to chain the use of tools and to use tools interactively.

  • In some cases, the use of tools is very sample-inefficient.

  • The decision to use a tool does not account for the cost of using the tool.
diff --git a/_site/site/LICENSE.md b/_site/site/LICENSE.md new file mode 100755 index 00000000..af1b0ec7 --- /dev/null +++ b/_site/site/LICENSE.md @@ -0,0 +1,9 @@ +# Released under MIT License + +Copyright (c) 2014 Mark Otto. + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/_site/site/README.md b/_site/site/README.md new file mode 100755 index 00000000..b6c7d402 --- /dev/null +++ b/_site/site/README.md @@ -0,0 +1,134 @@ +# Lanyon + +Lanyon is an unassuming [Jekyll](http://jekyllrb.com) theme that places content first by tucking away navigation in a hidden drawer. It's based on [Poole](http://getpoole.com), the Jekyll butler. + +![Lanyon](https://f.cloud.github.com/assets/98681/1825266/be03f014-71b0-11e3-9539-876e61530e24.png) +![Lanyon with open sidebar](https://f.cloud.github.com/assets/98681/1825267/be04a914-71b0-11e3-966f-8afe9894c729.png) + + +## Contents + +- [Usage](#usage) +- [Options](#options) + - [Sidebar menu](#sidebar-menu) + - [Themes](#themes) + - [Reverse layout](#reverse-layout) +- [Development](#development) +- [Author](#author) +- [License](#license) + + +## Usage + +Lanyon is a theme built on top of [Poole](https://github.com/poole/poole), which provides a fully furnished Jekyll setup—just download and start the Jekyll server. See [the Poole usage guidelines](https://github.com/poole/poole#usage) for how to install and use Jekyll. + + +## Options + +Lanyon includes some customizable options, typically applied via classes on the `` element. + + +### Sidebar menu + +Create a list of nav links in the sidebar by assigning each Jekyll page the correct layout in the page's [front-matter](http://jekyllrb.com/docs/frontmatter/). + +``` +--- +layout: page +title: About +--- +``` + +**Why require a specific layout?** Jekyll will return *all* pages, including the `atom.xml`, and with an alphabetical sort order. To ensure the first link is *Home*, we exclude the `index.html` page from this list by specifying the `page` layout. + + +### Themes + +Lanyon ships with eight optional themes based on the [base16 color scheme](https://github.com/chriskempson/base16). Apply a theme to change the color scheme (mostly applies to sidebar and links). + +![Lanyon with red theme](https://f.cloud.github.com/assets/98681/1825270/be065110-71b0-11e3-9ed8-9b8de753a4af.png) +![Lanyon with red theme and open sidebar](https://f.cloud.github.com/assets/98681/1825269/be05ec20-71b0-11e3-91ea-a9138ef07186.png) + +There are eight themes available at this time. 
+ +![Available theme classes](https://f.cloud.github.com/assets/98681/1817044/e5b0ec06-6f68-11e3-83d7-acd1942797a1.png) + +To use a theme, add any one of the available theme classes to the `` element in the `default.html` layout, like so: + +```html + + ... + +``` + +To create your own theme, look to the Themes section of [included CSS file](https://github.com/poole/lanyon/blob/master/public/css/lanyon.css). Copy any existing theme (they're only a few lines of CSS), rename it, and change the provided colors. + + +### Reverse layout + +![Lanyon with reverse layout](https://f.cloud.github.com/assets/98681/1825265/be03f2e4-71b0-11e3-89f1-360705524495.png) +![Lanyon with reverse layout and open sidebar](https://f.cloud.github.com/assets/98681/1825268/be056174-71b0-11e3-88c8-5055bca4307f.png) + +Reverse the page orientation with a single class. + +```html + + ... + +``` + + +### Sidebar overlay instead of push + +Make the sidebar overlap the viewport content with a single class: + +```html + + ... + +``` + +This will keep the content stationary and slide in the sidebar over the side content. It also adds a `box-shadow` based outline to the toggle for contrast against backgrounds, as well as a `box-shadow` on the sidebar for depth. + +It's also available for a reversed layout when you add both classes: + +```html + + ... + +``` + +### Sidebar open on page load + +Show an open sidebar on page load by modifying the `` tag within the `sidebar.html` layout to add the `checked` boolean attribute: + +```html + +``` + +Using Liquid you can also conditionally show the sidebar open on a per-page basis. For example, here's how you could have it open on the homepage only: + +```html + +``` + +## Development + +Lanyon has two branches, but only one is used for active development. + +- `master` for development. **All pull requests should be to submitted against `master`.** +- `gh-pages` for our hosted site, which includes our analytics tracking code. **Please avoid using this branch.** + + +## Author + +**Mark Otto** +- +- + + +## License + +Open sourced under the [MIT license](LICENSE.md). + +<3 diff --git a/_site/site/archieve.html b/_site/site/archieve.html new file mode 100644 index 00000000..1d529fb3 --- /dev/null +++ b/_site/site/archieve.html @@ -0,0 +1,439 @@ +

Blog Posts

+ + diff --git a/_site/site/atom.xml b/_site/site/atom.xml new file mode 100644 index 00000000..7c28fad4 --- /dev/null +++ b/_site/site/atom.xml @@ -0,0 +1,17028 @@ + + + + + + + 2023-02-12T14:07:39-05:00 + + + + + + + + + Toolformer - Language Models Can Teach Themselves to Use Tools + + 2023-02-10T00:00:00-05:00 + /site/2023/02/10/Toolformer - Language Models Can Teach Themselves to Use Tools + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents Toolformer, a language model that uses simple APIs to use external tools (calculator, QA system, search engine, translation system, and calendar).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2302.04761">Link to the paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>Starting with a language model, M, the goal is to enable the language model to use tools by invoking API calls.</p> + </li> + <li> + <p>An API call is denoted by the tuple $c =$ (api_name, api_input). It can be linearized as $e(c) =$ [api_name(api_input)$]$ or as $e(c, r) = [$api_name(api_input) $ -&gt; r]$ where $r$ denotes the result of the API.</p> + </li> + <li> + <p>The given dataset of plain text, $C$, is converted into a dataset $C*$ augmented with the API calls using a three-step process.</p> + </li> + <li> + <p>In the first step, a position ($i$) and API call candidates (for the position $i$) are sampled.</p> + + <ul> + <li> + <p>Positions are sampled by (i) computing the probability that M assigns to starting an API call for each position and (ii) retaining the top-$k$ positions with a probability greater than a threshold value.</p> + </li> + <li> + <p>For each of the sampled positions (say $i$), API calls are sampled by concatenating a prompt to the tokens till index $i$ and sampling from the model M. Examples that do not generate the “end of the API” token (i.e.,”]”) are discarded.</p> + </li> + </ul> + </li> + <li> + <p>In the second step, the API calls are executed to obtain response $r$ (text sequence).</p> + + <ul> + <li>API calls are filtered using the following criteria: if providing M with both the input and the output of the API makes it easier for M to predict the future token, compared to not using the API call at all or just using the input to the API, then the API call is helpful for M, and the example should be retained.</li> + </ul> + </li> + <li> + <p>In the last step, the remaining API calls are merged to obtain the augmented dataset $C*$ that is used for finetuning M.</p> + </li> + <li> + <p>Note that $C*$ contains $C$, so M is finetuned on the original dataset and examples where a tool is helpful.</p> + </li> + <li> + <p>During inference, the model is used for decoding in the usual way. Decoding is stopped when it produces the “-&gt;” token, and the corresponding API is used to generate the response. The decoding process (using the model) resumes with the API output appended to the decoded text.</p> + </li> +</ul> + +<h2 id="tools">Tools</h2> + +<ul> + <li> + <p>There are two constraints on the tools: (i) their input and output should be expressible as text, and (ii) few demonstrations can be obtained from the tools. The second constraint means that the tool should be useable or accessible.</p> + </li> + <li> + <p>The paper considered the following tools: a question-answering system, a Wikipedia search engine, a calculator, a calendar, and a machine translation system. 
Of these, only the calculator and calendar are non-neural network tools.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Subset of CCNet is used as the language modeling dataset.</p> + </li> + <li> + <p>GPT-J is used as the language model.</p> + </li> + <li> + <p>For finetuning, the batch size is 128, the learning rate is 1e-5, and a linear warmup for the first 10% of training is used.</p> + </li> + <li> + <p>Following models are compared:</p> + + <ul> + <li> + <p>GPT-J: Regular GPT-J model without any finetuning.</p> + </li> + <li> + <p>GPT-J + CC: GPT-J finetuned on $C$ without any API calls.</p> + </li> + <li> + <p>Toolformer, i.e. GPT-J finetuned on $C*$.</p> + </li> + <li> + <p>Toolformer with API calls disabled during training.</p> + </li> + <li> + <p>OPT 66B</p> + </li> + <li> + <p>GPT-3</p> + </li> + </ul> + </li> + <li> + <p>The models are evaluated in the prompted zero-shot setup, where models are instructed to solve a task without any in-context examples.</p> + </li> + <li> + <p>One difference from the standard greedy decoding is that the API call is used whenever it is one of the top-10 most likely next tokens. This is done to increase the use of API calls.</p> + </li> + <li> + <p>Evaluation Tasks</p> + + <ul> + <li> + <p>SQuAD, GoogleRE, and T-REx subsets of the LAMA benchmark where the model has to complete a short statement with a missing fact.</p> + + <ul> + <li> + <p>Since LAMA questions are based on Wikipedia, Toolformer isn’t allowed to use Wikipedia search.</p> + </li> + <li> + <p>The evaluation criteria is to check if the correct word is among the first five words predicted by the model.</p> + </li> + <li> + <p>Toolformer uses the question-answering tool for most cases, outperforming all the baselines.</p> + </li> + </ul> + </li> + <li> + <p>Math Dataset</p> + + <ul> + <li> + <p>eSDiv, SVAMP, and MAWPS benchmarks.</p> + </li> + <li> + <p>The first number predicted by the model is considered to be the output.</p> + </li> + <li> + <p>Toolformer uses the calculator tool for most cases, thereby outperforming all the baselines.</p> + </li> + </ul> + </li> + <li> + <p>Question Answering</p> + + <ul> + <li> + <p>Web Questions, Natural Questions, and TriviaQA datasets.</p> + </li> + <li> + <p>The evaluation criteria is to check if the correct word is among the first 20 words predicted by the model.</p> + </li> + <li> + <p>Question Answering tool is disabled for this setup.</p> + </li> + <li> + <p>Toolformer uses the Wikipedia tool for most cases, thereby outperforming all the baselines other than the much larger GPT-3 model.</p> + </li> + </ul> + </li> + <li> + <p>Multilingual Question Answering</p> + + <ul> + <li> + <p>MLQA benchmark.</p> + </li> + <li> + <p>The evaluation criteria is to check if the correct word is among the first ten words predicted by the model.</p> + </li> + <li> + <p>Toolformer uses the translation tool for most of the questions, with questions in Hindi being an exception.</p> + </li> + <li> + <p>However, Toolformer does not consistently outperform the GPT-J baseline, likely because, for some languages, finetuning on CCNet could hurt performance.</p> + </li> + </ul> + </li> + <li> + <p>Temporal Datasets</p> + + <ul> + <li> + <p>TEMPLAMA (cloze style queries where the answer changes with time) and DATESET (dataset generated through a series of templates and populated with random dates/durations).</p> + </li> + <li> + <p>While Toolformer outperforms the baselines for both datasets, it relies on the Wikipedia search and 
Question Answering tools (and not the calendar tool) for the LAMA dataset. On the DATESET dataset, it uses the calendar tool in the majority.</p> + </li> + </ul> + </li> + <li> + <p>Language Modeling</p> + + <ul> + <li> + <p>WikiText and a subset of 10,000 randomly selected documents from CCNet (not used during training of M).</p> + </li> + <li> + <p>Training on $C*$ does not increase perplexity (compared to training on C). In this experiment, the API calls are disabled during inference.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>Varying the size of the underlying models show that the ability to use tools emerges only around 755M parameters.</p> + </li> +</ul> + +<h2 id="future-work">Future Work</h2> + +<ul> + <li> + <p>Extending Toolformer to chain the use of tools and use tools interactively.</p> + </li> + <li> + <p>In some cases, the use of tools is very sample-inefficient.</p> + </li> + <li> + <p>Decision to use a tool does not account for the cost of using the tool.</p> + </li> +</ul> + + + + + Synthesized Policies for Transfer and Adaptation across Tasks and Environments + + 2021-03-29T00:00:00-04:00 + /site/2021/03/29/Synthesized Policies for Transfer and Adaptation across Tasks and Environments + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper studies transfer learning in RL, focusing on simultaneous transfer across both tasks and environments.</p> + </li> + <li> + <p>The key idea is to learn task and environment embeddings and compose them using a meta-rule, and the proposed approach is called SYNPO (Synthesized Policies).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1904.03276">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Three settings considered:</p> + + <ul> + <li> + <p><em>S1</em>: Transfer to a new (environment, task) pair when the agent has been trained on the environment and the task before (but not simultaneously).</p> + </li> + <li> + <p><em>S2</em>: Transfer to a new (environment, task) pair where either the environment or the task is not seen previously.</p> + </li> + <li> + <p><em>S3</em>: Transfer to a new (environment, task) pair where neither the environment nor the task is seen previously.</p> + </li> + </ul> + </li> + <li> + <p>In the second and third settings, the agent is allowed to collect some data in the new environment or task.</p> + </li> + <li> + <p>The (environment, task) combinations that the agent has seen during training are referred to as <em>seen</em> combinations, while the remaining combinations are referred to as the <em>unseen</em> combinations.</p> + </li> + <li> + <p>The key idea is to:</p> + + <ul> + <li> + <p>learn embeddings of environments and tasks</p> + </li> + <li> + <p>use these embeddings to compose a policy (parameterized as the linear combination of the policy basis).</p> + </li> + </ul> + </li> + <li> + <p>A disentanglement objective is used to decouple the task and environment embedding.</p> + </li> +</ul> + +<h3 id="policy-composition">Policy Composition</h3> + +<ul> + <li> + <p>Given an (environment, task) pair $z = (\epsilon, \tau)$, the policy is given as $\pi_z(a|s) \propto exp(\psi_s^TU(e_{\epsilon}, e_{\tau})\phi_{a} + b_{\pi} )) $.</p> + </li> + <li> + <p>Here $b_{\pi}$ is a scalar bias, $\psi_{s}$ and $\phi_{a}$ are state and action representations, $U$ is parameterized as the linear comination of $K$ basis matrices $\Theta_k$</p> + </li> + <li> + <p>$U(e_{\epsilon}, e_{\tau}) = \sum_{k=1}^{K}\alpha_k(e_{\epsilon}, 
e_{\tau})\Theta_k$.</p> + </li> + <li> + <p>The basis matrices (denoted by $\Theta_k$) are shared across tasks while the coefficients ($\alpha_k$) are specific to the (environment, task) pair.</p> + </li> + <li> + <p>During training, the agent also predicts rewards using the same set of basis but different coefficients.</p> + </li> +</ul> + +<h3 id="disentangling-environment-and-task-embeddings">Disentangling environment and task embeddings</h3> + +<ul> + <li> + <p>Given an (environment, task) pair, the agent is trained to decode the environment (and task) given the agent’s trajectory.</p> + </li> + <li> + <p>The sequence of state-action pairs (in the trajectory) is mapped to a sequence of state-action representations, given by $\psi_s^T\Theta_k\phi_{a}$</p> + </li> +</ul> + +<h2 id="experiment-setup">Experiment Setup</h2> + +<ul> + <li>The agent is trained (and evaluated) on imitation learning (mostly) and reinforcement learning setup.</li> +</ul> + +<h3 id="environments">Environments</h3> + +<ul> + <li> + <p>GRIDWORLD</p> + + <ul> + <li> + <p>Twenty $16 \times 16$ gird-aligned mazes that are similar in appearance but differ in topology.</p> + </li> + <li> + <p>The task is to collect colored blocks in a given order. In each task, the starting position of the agent and the position of the blocks is randomized.</p> + </li> + <li> + <p>Each environment has 20 tasks, leading to a total of 400 (environment, task) combinations.</p> + </li> + </ul> + </li> + <li> + <p><a href="https://arxiv.org/abs/1712.05474">THOR</a></p> + + <ul> + <li> + <p>This is a 3D simulator where the agent is placed in indoor photo-realistic scenes.</p> + </li> + <li> + <p>The task is the search for objects and perform actions like “put cabbage on the fridge.”</p> + </li> + <li> + <p>The setup uses 19 scenes (environments), with each environment comprising of 21 tasks.</p> + </li> + </ul> + </li> +</ul> + +<h3 id="baselines">Baselines</h3> + +<ul> + <li> + <p>MLPs that concatenate state, environment embeddings, and task embedding.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1606.05312">Successor feature model</a></p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1609.07088">Module Network</a></p> + </li> + <li> + <p>Multi-task Learning where the distinction between the environments is ignored.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>GRIDWORLD</p> + + <ul> + <li> + <p>In the first setting (<em>S1</em>)</p> + + <ul> + <li> + <p>SYNPO outperforms all the baselines.</p> + </li> + <li> + <p>As the agent is trained on more (environment, task) combinations, its performance on the unseen combinations improves. 
This trend saturates when the <em>seem/total</em> ratio reaches about 0.4 (i.e., training on 40% of all the combinations).</p> + </li> + <li> + <p>Task disentanglement is more important than environment disentanglement.</p> + </li> + </ul> + </li> + <li> + <p>In the second and third setting (<em>S2</em> and <em>S3</em>)</p> + + <ul> + <li> + <p>The agent uses one demonstration from each test pair to finetune the embeddings.</p> + </li> + <li> + <p><em>S2</em> is an easier setting than <em>S3</em>.</p> + </li> + <li> + <p>Transfer learning across tasks is easier than transfer learning across environments.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>THOR</p> + + <ul> + <li>SYNPO outperforms all the baselines on both seen and unseen combinations.</li> + </ul> + </li> +</ul> + + + + + + Deep Neural Networks for YouTube Recommendations + + 2021-03-22T00:00:00-04:00 + /site/2021/03/22/Deep Neural Networks for YouTube Recommendations + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes YouTube’s deep learning-based recommendation system.</p> + </li> + <li> + <p><a href="https://research.google/pubs/pub45530/">Link to the paper</a></p> + </li> +</ul> + +<h2 id="challenges">Challenges</h2> + +<ul> + <li> + <p>Scale - Very large number of users and videos.</p> + </li> + <li> + <p>Freshness - Very large number of videos uploaded every hour. The recommendation system should take these new videos into account as well.</p> + </li> + <li> + <p>Noise - User satisfaction needs to be modeled from noisy implicit feedback signal as the explicit signal is very sparse.</p> + </li> +</ul> + +<h2 id="system-overview">System Overview</h2> + +<ul> + <li> + <p>Two neural networks: one for candidate generation and another one for ranking.</p> + </li> + <li> + <p>Metrics</p> + + <ul> + <li> + <p>Offline metrics like precision, recall, ranking loss</p> + </li> + <li> + <p>A/B testing via live experiments</p> + </li> + </ul> + </li> +</ul> + +<h3 id="candidate-generation">Candidate Generation</h3> + +<ul> + <li> + <p>Input: events from a user’s YouTube activity history.</p> + </li> + <li> + <p>Output: small subset (hundreds) of videos.</p> + </li> + <li> + <p>Approach:</p> + + <ul> + <li> + <p>Recommendation is modeled as extreme multiclass classification.</p> + </li> + <li> + <p>Predict the video (from a corpus) that a user will watch at a given time.</p> + </li> + <li> + <p>The neural network’s task is to learn useful user embeddings, given the user’s context and history.</p> + </li> + <li> + <p>For each positive class (relevant video), negative classes (non-relevant videos) are sampled from the video corpus.</p> + </li> + </ul> + </li> + <li> + <p>Model Architecture</p> + + <ul> + <li> + <p>A feedforward network with input as user embeddings and context embeddings (watch history).</p> + </li> + <li> + <p>Watch history is a variable-length sequence of video ids, where each video id is mapped to an embedding.</p> + </li> + <li> + <p>The sequence of video ids is mapped to a sequence of embeddings, and this sequence is averaged to obtain fixed-sized embedding.</p> + </li> + <li> + <p>Additional signals like demographic features and search query embeddings can be added along with the context embeddings.</p> + </li> + <li> + <p>The age of a video is also used as a feature during training to account for the freshness of the content. 
This feature is set to zero (or slightly negative) during inference.</p> + </li> + </ul> + </li> + <li> + <p>Other Insights</p> + + <ul> + <li> + <p>Training examples are generated from all YouTube watches, including the watches from the videos embedded on other sites, to surface new content.</p> + </li> + <li> + <p>Generating the same number of training examples per user is important to avoid a small set of active users from dominating the model training.</p> + </li> + <li> + <p>Predicting a user’s next watch leads to better results than predicting a randomly held-out watch. This can be attributed to the general consumption pattern of videos (e.g., episodes are usually watched in order).</p> + </li> + </ul> + </li> +</ul> + +<h3 id="ranking">Ranking</h3> + +<ul> + <li> + <p>Input: list of candidate videos to rank from.</p> + </li> + <li> + <p>Output: score for each video.</p> + </li> + <li> + <p>Approach</p> + + <ul> + <li>A feedforward network (similar to candidate generation model) trained using logistic regression loss.</li> + </ul> + </li> + <li> + <p>Feature representation</p> + + <ul> + <li> + <p>Different types of features: categorical vs. continuous, univalent vs. multivalent, describes video vs. describes user or context.</p> + </li> + <li> + <p>Important signals include user’s interaction with the video (or similar videos), which source/channel added the video to the candidate set.</p> + </li> + <li> + <p>Embeddings are shared across features. For example, the representation for a video id remains the same, irrespective of whether it is being used for representing the “video to recommend” or the “last seen video.”</p> + </li> + <li> + <p>Feature normalization and transformations like exponents (square or square root) for continuous variables improve the performance.</p> + </li> + </ul> + </li> + <li> + <p>To model the expected watch time, the logistic regression loss is weighted by the observed watch time. 
For example, if a video was watched, its weight is given by the observed watch time, and if the video was not watched, its weight is set to 1.</p> + </li> + <li> + <p>In practice, this means that the logistic regression model learns odds that approximate the expected watch time of the video.</p> + </li> +</ul> + + + + + The Tail at Scale + + 2021-03-15T00:00:00-04:00 + /site/2021/03/15/The Tail at Scale + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents some causes for (temporary) high-latency episodes in large-scale online systems and techniques to mitigate their impact so that the tail of latency distribution remains short.</p> + </li> + <li> + <p><a href="https://research.google/pubs/pub40801/">Link to the paper</a></p> + </li> +</ul> + +<h2 id="why-does-variability-in-response-time-exist">Why does variability in response time exist</h2> + +<ul> + <li> + <p>Shared resources between processes on the same node</p> + </li> + <li> + <p>Background processes (daemons) could use cause a momentary spike in resource usage.</p> + </li> + <li> + <p>Processes running on different nodes may contend for global resources like shared file systems.</p> + </li> + <li> + <p>Maintenance activities like disk compaction or garbage collection.</p> + </li> + <li> + <p>Others like queueing, power limits, or energy management.</p> + </li> + <li> + <p>In the case of large-scale systems, the component-level variability is further amplified.</p> + </li> +</ul> + +<h2 id="reducing-component-variability">Reducing Component Variability</h2> + +<ul> + <li> + <p>Use differentiated service classes to prioritize user requests over non-interactive requests.</p> + </li> + <li> + <p>Reduce head-of-line blocking by breaking long-running requests into smaller requests.</p> + </li> + <li> + <p>Synchronize maintenance jobs across nodes to minimize the window for high latency.</p> + </li> + <li> + <p>Caching generally does not help to address tail latency.</p> + </li> +</ul> + +<h2 id="adapting-to-latency-variability">Adapting to Latency Variability</h2> + +<ul> + <li> + <p>Two categories of adaptation approaches</p> + + <ul> + <li> + <p>Within Request Short-Term Adaptations</p> + + <ul> + <li> + <p>These approaches are more relevant for services that perform many read queries on loosely consistent datasets.</p> + </li> + <li> + <p>Hedged Request</p> + + <ul> + <li> + <p>Send the request to multiple replicas, and once one of the replicas returns the result, cancel the other requests.</p> + </li> + <li> + <p>In practice, start by sending the request to only one replica. Send the secondary requests if the first request is outstanding for more than $95^{th}$ percentile of expected latency.</p> + </li> + <li> + <p>This introduces an additional $5\%$ load while substantially shortening the latency tail.</p> + </li> + <li> + <p>This approach work because often, the cause of latency is not the query itself but other factors like overloaded nodes.</p> + </li> + </ul> + </li> + <li> + <p>Tied Request</p> + + <ul> + <li> + <p>Hedged request approach makes a tradeoff regarding how long to wait before initiating requests to other replicas. 
The sooner the request is made, the lower should be the latency in serving the request, but more will be the overall load in the system.</p> + </li> + <li> + <p>The load in the system can be reduced by “tieing” requests (sent to different replicas) so that as soon as one replica starts processing the request, it can notify the other replicas, which could drop the request or deprioritize it.</p> + </li> + <li> + <p>In practice, “tieing” requests means that each replica has the identity of other replicas which may execute the request.</p> + </li> + <li> + <p>Note that there is a short window (of the average network message delay) when multiple replicas could start executing the request. This can be mitigated if the client (issuing the requests) introduces a delay to twice the average network message delay.</p> + </li> + </ul> + </li> + <li> + <p>Submit the request to the least loaded replica</p> + + <ul> + <li>This is less effective for reasons like the load on a replica can change after the request is made but before it is executed.</li> + </ul> + </li> + </ul> + </li> + <li> + <p>Cross-Request Long-Term Adaptations</p> + + <ul> + <li> + <p>These approaches are more relevant for situations where different services have different throughput.</p> + </li> + <li> + <p>Micro-partitions</p> + + <ul> + <li> + <p>Generate more paritions than the number of nodes.</p> + </li> + <li> + <p>The partitions can be dynamically assigned to machines to ensure proper load balancing.</p> + </li> + <li> + <p>In case of machine failure, many nodes can be used to quickly re-create the micro-partitions instead of waiting on one machine to read one single large partition.</p> + </li> + </ul> + </li> + <li> + <p>Selective Replication</p> + + <ul> + <li>With micro-partitioning, replicas for micro-partitions can be created ahead of time to achieve good load balancing.</li> + </ul> + </li> + <li> + <p>Latency induced probation</p> + + <ul> + <li>In some cases, removing a slow node can improve the overall latency of the system. The probated node can be re-incorporated when its latency improves.</li> + </ul> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>Large Information Retrieval Systems</p> + + <ul> + <li> + <p>In such systems, speed can be more critical than the quality of the result.</p> + </li> + <li> + <p>The system should return a “good enough” result that is available with low latency instead of waiting for the “best result” that is available with high latency.</p> + </li> + <li> + <p>In some cases, a request could trigger an unexpected code path or cause some other exception that could slow down the entire system.</p> + </li> + <li> + <p>In such cases, the <em>canary request</em> technique can be used where the system sends the request initially to only 1 or 2 nodes. 
The request is sent over to the other nodes only after receiving a successful response from the initial nodes.</p> + </li> + </ul> + </li> + <li> + <p>Requests that update state are easier to handle for several reasons:</p> + + <ul> + <li> + <p>The scale of latency-critical modifications is generally small.</p> + </li> + <li> + <p>The update can be performed asynchronously after responding to the user.</p> + </li> + <li> + <p>Quorum-based approaches (often used for ensuring consistent updates) are inherently tail-tolerant.</p> + </li> + </ul> + </li> +</ul> + + + + + Practical Lessons from Predicting Clicks on Ads at Facebook + + 2021-03-08T00:00:00-05:00 + /site/2021/03/08/Practical Lessons from Predicting Clicks on Ads at Facebook + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes several design choices for developing a model for predicting user response (clicks) on ads.</p> + </li> + <li> + <p><a href="https://research.fb.com/publications/practical-lessons-from-predicting-clicks-on-ads-at-facebook/">Link to the paper</a></p> + </li> +</ul> + +<h2 id="experimental-setup">Experimental Setup</h2> + +<ul> + <li> + <p>The model is trained/evaluated on offline data.</p> + </li> + <li> + <p>Evaluation metrics:</p> + + <ul> + <li> + <p>Normalized Cross-Entropy (or Normalized Entropy, NE)</p> + + <ul> + <li> + <p>Defined as the predictive log-loss per impression, divided by the entropy of the background CTR (click-through rate).</p> + </li> + <li> + <p>Background CTR is the average empirical CTR of the training data.</p> + </li> + <li> + <p>Lower normalized cross-entropy is better.</p> + </li> + <li> + <p>The normalization term is important to make the metric insensitive to the background CTR. Otherwise, the log loss can easily be made low when background CTR is close to 0 or 1.</p> + </li> + <li> + <p>NE can also be written as $RIG - 1$, where $RIG$ is the Relative Information Gain.</p> + </li> + </ul> + </li> + <li> + <p>Calibration</p> + + <ul> + <li>Ratio of average estimated CTR and empirical CTR.</li> + </ul> + </li> + <li> + <p>Area-Under-ROC (AUC) is a good metric for measuring ranking quality (among ads). 
However, it is <strong>not used</strong> as a metric because it is insensitive to calibration and hence cannot flag over-delivery or under-delivery of ads.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="implementation-details">Implementation Details</h2> + +<ul> + <li> + <p>Feature Transformation</p> + + <ul> + <li> + <p>A given ad impression, $e$, is transformed into an $n$-dimensional vector, $x$, where the $i^{th}$ index denotes the value of the $i^{th}$ categorical feature.</p> + </li> + <li> + <p>Continuous features are binned, and the bin index is used as a categorical feature, thus applying a non-linear transformation to the features.</p> + </li> + <li> + <p>Categorical features that are tuple-like (i.e., have a tuple of values) can be converted into new categorical features by taking a Cartesian product.</p> + </li> + <li> + <p>Boosted decision trees can be used to implement the previous two transformations in one go.</p> + + <ul> + <li> + <p>Each tree is used as a categorical feature that takes the value of the index of the leaf node that an ad maps to.</p> + </li> + <li> + <p>The paper used the Gradient Boosting Machine with the $L_2-$TreeBoost algorithm.</p> + </li> + <li> + <p>Using the tree feature transformation improves the Normalized Cross-Entropy by $3.4\%$.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>Model</p> + + <ul> + <li> + <p>Logistic Regression (LR) or Bayesian online learning scheme for probit regression (BOPR) algorithms are used for training a linear classifier model.</p> + </li> + <li> + <p>While both LR and BOPR models provide similar performance, the LR model is half the BOPR model’s size and faster at training and inference.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="role-of-data-freshness">Role of Data Freshness</h2> + +<ul> + <li> + <p>When a model is trained on the data from a particular day and evaluated on data from the subsequent days, the model’s performance degrades as the delay between the training and test sets increases.</p> + </li> + <li> + <p>This highlights the importance of the freshness of the training data.</p> + </li> + <li> + <p>One straightforward approach can be to train the model every day.</p> + </li> + <li> + <p>Alternatively, the linear classifier can be trained using online learning, while the boosted decision tree can still be trained daily.</p> + </li> + <li> + <p>Different choices for setting the learning rate (for online training of the linear classifier) are compared, and the <a href="https://research.google/pubs/pub41159/">per-coordinate learning rate</a> is found to perform best in practice.</p> + </li> +</ul> + +<h2 id="generating-real-time-training-data">Generating Real-Time Training Data</h2> + +<ul> + <li> + <p>An “online joiner” system is used to generate real-time training data for the linear classifier.</p> + </li> + <li> + <p>The challenging part is that, while there are data points with a “positive” label (i.e., the user clicked on the ad), there are no datapoints with a “negative” label (since there is no “no-click” button that the user can click).</p> + </li> + <li> + <p>An impression is considered to have the “no-click” label if the user does not click on the ad within a (long) time window of seeing the ad.</p> + </li> + <li> + <p>Too short a time window could mislabel some impressions, while too long a time window will delay the real-time training data.</p> + </li> + <li> + <p>The online joiner performs a distributed stream-to-stream join on the stream of ad impressions and the stream of ad clicks using a HashQueue.</p> + </li> + <li> + <p>A HashQueue:</p> + + <ul> + <li> + <p>comprises a First-In-First-Out queue as a buffer window and a hash map for fast random access to label impressions.</p> + </li> + <li> + <p>supports three operations on key-value pairs: enqueue, dequeue, and lookup.</p> + </li> + </ul> + </li> +</ul>
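+
+<p>As a rough illustration, the sketch below implements such a HashQueue (the time-based windowing details are assumptions for illustration; the paper does not spell them out):</p>
+
+<pre><code class="language-python">import time
+from collections import OrderedDict
+
+class HashQueue:
+    """FIFO buffer window over key-value pairs with O(1) lookup."""
+    def __init__(self, window_seconds):
+        self.window = window_seconds
+        self.items = OrderedDict()  # key: (value, enqueue_time)
+
+    def enqueue(self, key, value):
+        self.items[key] = (value, time.time())
+
+    def lookup(self, key):  # e.g., find the impression when a click arrives
+        entry = self.items.get(key)
+        return entry[0] if entry else None
+
+    def dequeue(self):
+        """Pop impressions older than the window; label them no-click."""
+        expired = []
+        while self.items:
+            key, (value, t) = next(iter(self.items.items()))
+            if time.time() - t &lt; self.window:
+                break
+            self.items.popitem(last=False)
+            expired.append((key, value))
+        return expired
+</code></pre>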
<h2 id="memory-and-latency">Memory and Latency</h2> + +<ul> + <li> + <p>Increasing the number of boosting trees shows diminishing returns, and most of the improvements come from the first 500 trees.</p> + </li> + <li> + <p>The top 10 features account for half of the total feature importance, while the last 300 features add less than 1% feature importance.</p> + </li> + <li> + <p>Features in the boosting model can be broadly classified as contextual or historical.</p> + </li> + <li> + <p>Historical features provide much more explanatory power than contextual features, though contextual features are helpful for handling the cold-start problem.</p> + </li> + <li> + <p>Models trained with just the contextual features rely more heavily on data freshness than models trained with just the historical features.</p> + </li> + <li> + <p>Uniform subsampling and negative downsampling techniques are used to limit the amount of training data.</p> + </li> + <li> + <p>In the case of negative downsampling, the model needs to be re-calibrated as well.</p> + </li> +</ul> + + + + + Ad Click Prediction - a View from the Trenches + + 2021-03-01T00:00:00-05:00 + /site/2021/03/01/Ad Click Prediction - a View from the Trenches + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents case studies from the experience of deploying an ad click-through rate (CTR) prediction model at Google.</p> + </li> + <li> + <p>The paper focuses on themes related to memory footprint, performance analysis, calibration, confidence in the predictions, and feature engineering.</p> + </li> + <li> + <p><a href="https://research.google/pubs/pub41159/">Link to the paper</a></p> + </li> +</ul> + +<h2 id="system-overview">System Overview</h2> + +<ul> + <li> + <p>Features (corresponding to a given ad) include the search query and the metadata in the ad. The features are very sparse.</p> + </li> + <li> + <p>A single-layer, regularized Logistic Regression model is trained with Online Gradient Descent (same as Stochastic Gradient Descent, but in the online setting).</p> + </li> + <li> + <p>From a memory perspective, it is important to minimize the size of the final model.</p> + </li> + <li> + <p>Adding just the L1 penalty is not sufficient to produce weights that are precisely equal to 0.</p> + </li> + <li> + <p>The <a href="http://proceedings.mlr.press/v15/mcmahan11b.html">“Follow The (Proximally) Regularized Leader” algorithm or FTRL-Proximal algorithm</a> is used to learn sparse models without losing accuracy.</p> + </li> + <li> + <p>Using per-coordinate learning rates improves the performance at the cost of memory, as both the sum of gradients and the sum of the squares of gradients are tracked for each feature.</p> + + <ul> + <li> + <p>In practice, some of the cost can be alleviated by approximating that all the events containing a given feature have the same probability.</p> + </li> + <li> + <p>In such a case, the sum of the squares of gradients can be approximated using the counts of positive and negative events alone, as illustrated in the sketch below.</p> + </li> + </ul> + </li>
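+ <li>
+ <p>A minimal sketch of this approximation (the $\alpha / (\beta + \sqrt{\sum_t g_t^2})$ form of the per-coordinate rate and the count-based approximation $\sum_t g_t^2 \approx PN/(P+N)$ follow the paper; variable names are placeholders):</p>
+
+ <pre><code class="language-python">import math
+
+def per_coordinate_rate(alpha, beta, n_pos, n_neg):
+    """alpha / (beta + sqrt(sum of squared gradients)) for one feature,
+    approximating the squared-gradient sum by P*N / (P + N), where P and
+    N count the positive/negative events containing the feature."""
+    total = n_pos + n_neg
+    grad_sq_sum = (n_pos * n_neg) / total if total else 0.0
+    return alpha / (beta + math.sqrt(grad_sq_sum))
+</code></pre>
+ </li>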
<li> + <p>Some memory overhead can be reduced based on the following observation: the vast majority of features are extremely rare. Hence, it is not necessary to track the statistics for such rare features.</p> + + <ul> + <li> + <p>However, in an online setting, it is not known upfront which features will be rare.</p> + </li> + <li> + <p>The paper proposes to use probabilistic feature inclusion - a feature is added to the model with probability $p$. Once it is added, the feature is not removed.</p> + </li> + <li> + <p>An alternative approach is to use a rolling set of counting Bloom filters to check if a feature has appeared at least $n$ times in training. Bloom filters are probabilistic data structures and can return false positives.</p> + </li> + </ul> + </li> + <li> + <p>Memory can also be saved by using fewer bits for encoding weights.</p> + + <ul> + <li> + <p>Most of the weight coefficients lie in the range $(-2, 2)$, and a $16$-bit encoding is used in place of a $32$- or $64$-bit encoding.</p> + </li> + <li> + <p>This quantization approach needs to account for roundoff problems. The fix is easy to implement.</p> + </li> + </ul> + </li> + <li> + <p>When training many models with similar hyperparameters, per-model learning rate counters can be replaced by statistics shared by all the models, thus reducing the memory footprint.</p> + </li> + <li> + <p>A Single Value Structure is used to reduce the memory footprint when evaluating a very large set of model variants that differ only in the addition/removal of a small subset of features.</p> + + <ul> + <li> + <p>All the models that use a feature share a single value structure corresponding to the feature. This reduces the memory overhead by an order of magnitude.</p> + </li> + <li> + <p>During the update, each model computes the weight updates corresponding to all the features that it is using. The updated weight is averaged across all the models and used to update the single value structure.</p> + </li> + </ul> + </li> + <li> + <p>Since CTR datasets are generally highly imbalanced, the training data (for the negative class) can be subsampled to reduce the amount of data to train over. The loss component (corresponding to the negative class) can be appropriately scaled up.</p> + </li> + <li> + <p>Metrics</p> + + <ul> + <li> + <p>Offline metrics like AucLoss (1 - AUC), Log Loss, Squared Error</p> + </li> + <li> + <p>Online loss is computed on the new training data (new incoming traffic) before training on it.</p> + </li> + </ul> + </li> + <li> + <p>The confidence in the model’s prediction is estimated using a heuristic called the <em>uncertainty score</em>. It can be measured using the dot product of the feature vector and the vector of learning rates.</p> + + <ul> + <li> + <p>The idea is that the learning rates already maintain a notion of uncertainty.</p> + </li> + <li> + <p>Features for which the learning rate is high are the features for which uncertainty is also high.</p> + </li> + </ul> + </li> + <li> + <p>Calibrating Predictions</p> + + <ul> + <li> + <p>The calibration can be improved by applying correction functions $\tau_d(p)$ where $p$ is the predicted CTR, and $d$ is an element of a partition of the training data.</p> + </li> + <li> + <p>$\tau$ can be modeled as $\gamma p^{\kappa}$ where $\gamma$ and $\kappa$ are learned using Poisson regression (see the sketch below).</p> + </li> + </ul> + </li>
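+ <li>
+ <p>A minimal sketch of applying such a correction (assuming the $\tau(p) = \gamma p^{\kappa}$ form with $\gamma$ and $\kappa$ already fit; the clipping is an added safeguard, not from the paper):</p>
+
+ <pre><code class="language-python">def calibrate(p, gamma, kappa):
+    """Correct a predicted CTR p with tau(p) = gamma * p**kappa, where
+    gamma and kappa were fit per data partition via Poisson regression."""
+    return min(max(gamma * p ** kappa, 0.0), 1.0)
+</code></pre>
+ </li>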
<li> + <p>Unsuccessful Experiments</p> + + <ul> + <li> + <p>Aggressive feature hashing was tried to reduce the memory overhead. However, it leads to a significant loss in performance.</p> + </li> + <li> + <p>Using dropout did not help, probably because the features are sparse.</p> + </li> + <li> + <p>Using feature bagging hurt the AucLoss.</p> + </li> + <li> + <p>Feature vector normalization did not improve performance, probably because of the per-coordinate learning rates and regularization.</p> + </li> + </ul> + </li> +</ul> + + + + + Anatomy of Catastrophic Forgetting - Hidden Representations and Task Semantics + + 2021-02-22T00:00:00-05:00 + /site/2021/02/22/Anatomy of Catastrophic Forgetting - Hidden Representations and Task Semantics + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper studies the effect of catastrophic forgetting on representations in neural networks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2007.07400">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Techniques:</p> + + <ul> + <li> + <p>Representational Similarity Measures</p> + </li> + <li> + <p>Layer Freezing</p> + </li> + <li> + <p>Layer Reset</p> + </li> + </ul> + </li> + <li> + <p>Datasets</p> + + <ul> + <li> + <p>Split CIFAR-10</p> + + <ul> + <li> + <p>The CIFAR-10 dataset is split into <em>m</em> (=2) tasks, where each task is an <em>n</em>-way classification task.</p> + </li> + <li> + <p>The underlying network has a shared trunk with <em>m</em> heads, one head per task.</p> + </li> + </ul> + </li> + <li> + <p>Split CIFAR-100 Distribution Shift</p> + + <ul> + <li>Each task requires distinguishing between <em>n</em> CIFAR-100 <em>superclasses</em> with training/test data corresponding to a <em>subset</em> of constituent classes.</li> + </ul> + </li> + </ul> + </li> + <li> + <p>Network Architecture</p> + + <ul> + <li>VGG, ResNet and DenseNet</li> + </ul> + </li> +</ul> + +<h2 id="questions">Questions</h2> + +<ul> + <li> + <p>Are all representations (throughout the network) equally responsible for forgetting?</p> + + <ul> + <li> + <p><em>Higher</em> layers (layers closer to the output) are the primary source of catastrophic forgetting.</p> + </li> + <li> + <p>The <a href="https://arxiv.org/abs/1905.00414">Centered Kernel Alignment (CKA)</a> technique is used to compare the similarity between the layer representations, before and after training on the second task.</p> + </li> + <li> + <p>Higher layer representations change significantly when training over two tasks, while the lower layer representations remain stable.</p> + </li> + <li> + <p>When finetuning on the second task, freezing the lower layers has only a minor effect on the accuracy of the second task.</p> + </li> + <li> + <p>In <em>layer reset</em> experiments, after training on the second task, the weights of some of the layers are reset to their values after training on the first task.</p> + + <ul> + <li>Resetting the weights of higher layers leads to a significant improvement in the performance on the first task.</li> + </ul> + </li> + </ul> + </li> + <li> + <p>Do common approaches for countering catastrophic forgetting work by stabilizing the higher layers?</p> + + <ul> + <li> + <p>Yes - both <a href="https://arxiv.org/abs/1612.00796">EWC</a> and replay-based approaches counter catastrophic forgetting by stabilizing the higher layers.</p> + </li> + <li> + <p>This is demonstrated by showing that as the quadratic penalty for EWC (or the fraction of data from the replay buffer) increases (to reduce catastrophic forgetting), the representations for higher layers change less during the second task.</p> + </li> + </ul> + </li>
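+ <li>
+ <p>As an aside, a minimal sketch of the linear variant of the CKA similarity used in these analyses (the paper may use a different estimator):</p>
+
+ <pre><code class="language-python">import numpy as np
+
+def linear_cka(X, Y):
+    """Linear CKA between two activation matrices of shape
+    (num_examples, num_features); 1.0 means identical representations."""
+    X = X - X.mean(axis=0)  # center each feature
+    Y = Y - Y.mean(axis=0)
+    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
+    return hsic / (np.linalg.norm(X.T @ X, "fro") *
+                   np.linalg.norm(Y.T @ Y, "fro"))
+</code></pre>
+ </li>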
+ <li> + <p>When training over a sequence of tasks, are similar tasks more likely to be forgotten than different tasks?</p> + + <ul> + <li> + <p>Setup I</p> + + <ul> + <li> + <p>Training over a sequence of two binary classification tasks.</p> + </li> + <li> + <p>Task 1: Two related classes (say <code class="language-plaintext highlighter-rouge">ship</code> and <code class="language-plaintext highlighter-rouge">truck</code>)</p> + </li> + <li> + <p>Task 2: Two related classes, which may or may not be related to the classes for Task 1. For example, the classes could be</p> + + <ul> + <li> + <p><code class="language-plaintext highlighter-rouge">cat</code> and <code class="language-plaintext highlighter-rouge">horse</code> (not related to the classes of the first task)</p> + </li> + <li> + <p><code class="language-plaintext highlighter-rouge">plane</code> and <code class="language-plaintext highlighter-rouge">car</code> (related to the classes of the first task)</p> + </li> + </ul> + </li> + <li> + <p>Training over semantically similar tasks (here <code class="language-plaintext highlighter-rouge">plane</code> and <code class="language-plaintext highlighter-rouge">car</code>) leads to less forgetting.</p> + </li> + </ul> + </li> + <li> + <p>Setup II</p> + + <ul> + <li> + <p>Training over a sequence of two classification tasks.</p> + </li> + <li> + <p>Task 1: Four classes where the classes can be grouped into two groups (say <code class="language-plaintext highlighter-rouge">deer</code>, <code class="language-plaintext highlighter-rouge">dog</code>, <code class="language-plaintext highlighter-rouge">ship</code> and <code class="language-plaintext highlighter-rouge">truck</code>)</p> + </li> + <li> + <p>Task 2: Two related classes, which may be related to group 1 or group 2. For example, the classes could be two animals or two objects.</p> + </li> + <li> + <p>After training on the second task, classes (from Task 1) that are in a different group than the classes from Task 2 are forgotten less.</p> + </li> + </ul> + </li> + <li> + <p>Conclusion</p> + + <ul> + <li> + <p>Task representational similarity is a function of both the underlying data and the optimization procedure.</p> + </li> + <li> + <p>Forgetting is most severe for task representations of intermediate similarity.</p> + </li> + <li> + <p>Representational similarity is a necessary but not sufficient condition for forgetting.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>How does catastrophic forgetting change as the task similarity changes?</p> + + <ul> + <li> + <p>If the model learns different representations for dissimilar tasks, increasing dissimilarity can help to avoid forgetting.</p> + </li> + <li> + <p>When training the two-task, two-class (per task) CIFAR-10 setup with an “others” class (classes not already used in the setup), the forgetting is reduced.</p> + </li> + </ul> + </li> +</ul> + + + + + When Do Curricula Work?
+ + 2021-02-15T00:00:00-05:00 + /site/2021/02/15/When Do Curricula Work + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper systematically investigates when curriculum learning helps.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2012.03107">Link to the paper</a></p> + </li> +</ul> + +<h2 id="implicit-curricula">Implicit Curricula</h2> + +<ul> + <li> + <p>Implicit curricula refer to the order in which a network learns data points when trained using stochastic gradient descent, with iid sampling of data.</p> + </li> + <li> + <p>When training, let us say that the model makes a correct prediction for a given datapoint in the $i^{th}$ epoch (and correct predictions in all the subsequent epochs). The $i^{th}$ epoch is referred to as the <em>learned iteration</em> of the datapoint (the iteration in which the datapoint was learned).</p> + </li> + <li> + <p>The paper studied multiple models (VGG, ResNet, WideResNet, DenseNet, and EfficientNet) with different optimizers (Adam and SGD with momentum).</p> + </li> + <li> + <p>The resulting implicit curricula are broadly consistent within the model families, making the following discussion less dependent on the model architecture.</p> + </li> +</ul> + +<h2 id="explicit-curricula">Explicit Curricula</h2> + +<ul> + <li>When defining an explicit curriculum, three important components stand out.</li> +</ul> + +<h3 id="scoring-function">Scoring Function</h3> + +<ul> + <li> + <p>Maps a data point to a numerical score of <em>difficulty</em>.</p> + </li> + <li> + <p>Choices:</p> + + <ul> + <li> + <p>Loss function for a model</p> + </li> + <li> + <p><em>learned iteration</em></p> + </li> + <li> + <p>Estimated c-score - It captures a given model’s consistency in correctly predicting a given datapoint’s label when trained on an iid dataset (not containing the datapoint).</p> + </li> + </ul> + </li> + <li> + <p>The three scoring functions are computed for two models on the CIFAR dataset.</p> + </li> + <li> + <p>The resulting six scores have a high Spearman Rank correlation. Hence, for the rest of the discussion, only the c-score is used.</p> + </li> +</ul> + +<h3 id="pacing-function">Pacing Function</h3> + +<ul> + <li> + <p>This function, denoted by $g(t)$, controls the size of the training dataset at step $t$.</p> + </li> + <li> + <p>At step $t$, the model would be trained on the first $g(t)$ examples (as per the ordering).</p> + </li> + <li> + <p>Choices: logarithmic, exponential, step, linear, quadratic, and root.</p> + </li> +</ul> + +<h3 id="order">Order</h3> + +<ul> + <li> + <p>Order in which the data points are picked:</p> + + <ul> + <li> + <p><em>Curriculum</em> - Ordering points from lowest score to highest and training on the easiest data points first.</p> + </li> + <li> + <p><em>Anti Curriculum</em> - Ordering points from highest score to lowest and training on the hardest data points first.</p> + </li> + <li> + <p><em>Random</em> - Randomly selecting the data points to train on.</p> + </li> + </ul> + </li> +</ul>
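+
+<p>A rough sketch of a pacing-function family (the exact parameterization in the paper differs in details like the initial fraction of data; this is just to make the idea concrete):</p>
+
+<pre><code class="language-python">import math
+
+def pacing(t, total_steps, dataset_size, kind="linear", start_frac=0.1):
+    """g(t): how many of the lowest-scored (easiest) examples are
+    available to the model at training step t."""
+    frac = min(t / total_steps, 1.0)
+    if kind == "linear":
+        used = start_frac + (1 - start_frac) * frac
+    elif kind == "quadratic":
+        used = start_frac + (1 - start_frac) * frac ** 2
+    elif kind == "root":
+        used = start_frac + (1 - start_frac) * math.sqrt(frac)
+    elif kind == "step":
+        used = start_frac if frac &lt; 0.5 else 1.0
+    else:
+        raise ValueError(kind)
+    return max(1, int(used * dataset_size))
+</code></pre>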
<h2 id="observations">Observations</h2> + +<ul> + <li> + <p>The paper performed a hyperparameter sweep over 180 pacing functions and three orderings for three random seeds over the CIFAR10 and CIFAR100 datasets. For both datasets, the best performance is obtained with random ordering, indicating that curricula did not give any benefits.</p> + </li> + <li> + <p>However, the curriculum is useful when the number of training iterations is small.</p> + </li> + <li> + <p>It also helps with noisy data training (which is simulated by randomly permuting the labels).</p> + </li> + <li> + <p>The observations for the smaller CIFAR10/100 datasets generalize to slightly larger datasets like FOOD101 and FOOD101N.</p> + </li> +</ul> + + + + + + Continual learning with hypernetworks + + 2021-02-08T00:00:00-05:00 + /site/2021/02/08/Continual learning with hypernetworks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes the use of task-conditioned <a href="https://shagunsodhani.com/papers-I-read/HyperNetworks">HyperNetworks</a> for lifelong learning / continual learning setups.</p> + </li> + <li> + <p>The idea is that the HyperNetwork would only need to remember the task-conditioned weights and not the input-output mapping for all the data points.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1906.00695">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/chrhenning/hypercl">Author’s Implementation</a></p> + </li> +</ul> + +<h2 id="terminology">Terminology</h2> + +<ul> + <li> + <p>$f$ denotes the network for the given $t^{th}$ task.</p> + </li> + <li> + <p>$h$ denotes the HyperNetwork that generates the weights for $f$.</p> + </li> + <li> + <p>$\Theta_{h}$ denotes the parameters of $h$.</p> + </li> + <li> + <p>$e^{t}$ denotes the input task-embedding for the $t^{th}$ task.</p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>When training on the $t^{th}$ task, the HyperNetwork generates the weights for the network $f$.</p> + </li> + <li> + <p>The current task loss is computed using the generated weights, and the candidate weight update ($\Delta \Theta_{h}$) is computed for $h$.</p> + </li> + <li> + <p>The actual parameter update is computed by minimizing the following loss:</p> + </li> +</ul> + +<p>$L_{total} = L_{task}(\Theta_{h}, e^{T}, X^{T}, Y^{T}) + \frac{\beta_{output}}{T-1} \sum_{t=1}^{T-1} \lVert f_{h}(e^{t}, \Theta_{h}^*) - f_{h}(e^{t}, \Theta_{h} + \Delta \Theta_{h}) \rVert^2$</p> + +<ul> + <li> + <p>$L_{task}$ is the loss for the current task.</p> + </li> + <li> + <p>$(X^{T}, Y^{T})$ denotes the training datapoints for the $T^{th}$ task.</p> + </li> + <li> + <p>$\beta_{output}$ is a hyperparameter to control the regularizer’s strength.</p> + </li> + <li> + <p>$\Theta_{h}^*$ denotes the optimal parameters after training on the first $T-1$ tasks.</p> + </li> + <li> + <p>$\Theta_{h} + \Delta \Theta_{h}$ denotes the one-step update on the current $h$ model.</p> + </li> + <li> + <p>In practice, the task encoding $e^{t}$ is chunked into smaller vectors, and these vectors are fed as input to the HyperNetwork.</p> + </li> + <li> + <p>This enables the HyperNetwork to produce weights iteratively, instead of all at once, thus helping to scale to larger models.</p> + </li> + <li> + <p>The paper also considers the problem of inferring the task embedding from a given input pattern.</p> + </li> + <li> + <p>Specifically, the paper uses task-dependent uncertainty, where the task embedding with the least predictive uncertainty is chosen as the task embedding for the given unknown task. This approach is referred to as HNET+ENT.</p> + </li>
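+ <li>
+ <p>A simplified sketch of this objective (the <code class="language-plaintext highlighter-rouge">hnet(e, params=...)</code> signature is hypothetical, and the one-step lookahead $\Delta \Theta_{h}$ is omitted, so this only approximates the regularizer above):</p>
+
+ <pre><code class="language-python">import torch
+
+def total_loss(task_loss, hnet, theta_star, prev_embeddings, beta_out):
+    """Current-task loss plus an output regularizer: weights generated
+    for old task embeddings should stay close to the weights that the
+    frozen snapshot (theta_star) generated for them."""
+    reg = 0.0
+    for e in prev_embeddings:
+        with torch.no_grad():
+            target = hnet(e, params=theta_star)  # hypothetical signature
+        reg = reg + (hnet(e) - target).pow(2).sum()
+    return task_loss + beta_out * reg / max(len(prev_embeddings), 1)
+</code></pre>
+ </li>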
<li> + <p>The paper also considers using HyperNetworks to learn the weights for a task-specific generative model. This generative model will be used to generate pseudo samples for rehearsal-based approaches. The paper considers two cases:</p> + + <ul> + <li> + <p>HNET+R, where the replay model (i.e., the generative model) is parameterized using a HyperNetwork.</p> + </li> + <li> + <p>HNET+TIR, where an auxiliary task inference classifier is used to predict the task identity.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Three setups are considered:</p> + + <ul> + <li> + <p>CL1 - Task identity is given to the model.</p> + </li> + <li> + <p>CL2 - Task identity is not given, but task-specific heads are used.</p> + </li> + <li> + <p>CL3 - Task identity needs to be explicitly inferred.</p> + </li> + </ul> + </li> + <li> + <p>On the permuted MNIST task, the proposed approach outperforms baselines like Synaptic Intelligence and Online EWC, and the performance gap is more significant for larger task sequences.</p> + </li> + <li> + <p>Forward knowledge transfer is observed with the CIFAR datasets.</p> + </li> + <li> + <p>One potential limitation (which is more of a limitation of HyperNetworks) is that HyperNetworks may be harder to scale for larger models like ResNet50 or transformers, thus limiting their usefulness for lifelong learning use cases.</p> + </li> +</ul> + + + + + Zero-shot Learning by Generating Task-specific Adapters + + 2021-02-01T00:00:00-05:00 + /site/2021/02/01/Zero-shot Learning by Generating Task-specific Adapters + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper introduces HYPTER - a framework for zero-shot learning (ZSL) in text-to-text transformer models by training a <a href="https://shagunsodhani.com/papers-I-read/HyperNetworks"><strong>Hyp</strong>erNetwork</a> to generate task-specific <a href="https://arxiv.org/abs/1902.00751">adap<strong>ter</strong>s</a> from task descriptions.</p> + </li> + <li> + <p>The focus is on <em>in-task</em> zero-shot learning (e.g., learning to predict an unseen class or relation) and not on <em>cross-task</em> learning (e.g., training on sentiment analysis and evaluating on a question-answering task).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2101.00420">Link to the paper</a></p> + </li> +</ul> + +<h2 id="terminology">Terminology</h2> + +<ul> + <li> + <p><em>Task</em> - an NLP task, like classification or question answering.</p> + </li> + <li> + <p><em>Sub-task</em></p> + + <ul> + <li> + <p>A class/relation/question within a task.</p> + </li> + <li> + <p>Denoted by a tuple $(d, D)$ where $d$ is the language description while $D$ represents the subtask’s dataset.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li>Develop a ZSL approach for transfer to new subtasks within a task, using the task description available for each subtask.</li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>HYPTER has two main parts:</p> + + <ul> + <li> + <p>Main network</p> + + <ul> + <li> + <p>A pretrained text-to-text network</p> + </li> + <li> + <p>Instantiated as a BART-Base/Large model</p> + </li> + </ul> + </li> + <li> + <p>HyperNetwork</p> + + <ul> + <li>Generates the weights for adapter networks that will be plugged into the main network.</li> + </ul> + </li> + </ul> + </li> + <li> + <p>The HyperNetwork has two parts:</p> + + <ul> + <li> + <p>Encoder</p> + + <ul> + <li> + <p>Encodes the task description</p> + </li> + <li> + <p>Instantiated as a RoBERTa-Base model</p> + </li> + </ul> + </li> + <li> + <p>Decoder</p> + + <ul> + <li> + <p>Decodes the encoding into
weights for multiple adapters (in parallel)</p> + </li> + <li> + <p>Instantiated as a Feedforward Network</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>The model trains in two phases:</p> + + <ul> + <li> + <p>The main network is trained on all the data by concatenating the task description with the input.</p> + </li> + <li> + <p>Adapters are trained by sampling a task from the train set while keeping the main network frozen.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li>While the idea is very promising and interesting, the evaluation felt quite limited. It uses just two datasets, <a href="https://leaderboard.allenai.org/zest/submissions/public">Zero-shot learning from Task Descriptions</a> and <a href="https://eval.ai/web/challenges/challenge-page/689/overview">Zero-shot Relation Extraction</a>, and shows some improvements over the baseline of directly finetuning with task descriptions as the prompt.</li> +</ul> + + + + + HyperNetworks + + 2021-01-25T00:00:00-05:00 + /site/2021/01/25/HyperNetworks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper explores HyperNetworks. The idea is to use one network (the HyperNetwork) to generate the weights for another network.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1609.09106">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/hardmaru/supercell/blob/master/supercell.py">Author’s implementation</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<h3 id="static-hypernetworks---hypernetworks-for-cnns">Static HyperNetworks - HyperNetworks for CNNs</h3> + +<ul> + <li> + <p>Consider a $D$-layer CNN where the parameters for the $j^{th}$ layer are stored in a matrix $K^j$ of the shape $N_{in}f_{size} \times N_{out}f_{size}$.</p> + </li> + <li> + <p>The HyperNetwork is implemented as a two-layer linear network where the input is a layer embedding $z^j$, and the output is $K^j$.</p> + </li> + <li> + <p>The first layer (of the HyperNetwork) maps the input to $N_{in}$ different outputs using $N_{in}$ weight matrices.</p> + </li> + <li> + <p>The second layer maps the different $N_{in}$ inputs to $K_{i}$ using a shared matrix. The resulting $N_{in}$ (number of) $K_{i}$ matrices are concatenated to obtain $K^j$.</p> + </li> + <li> + <p>As a side note, a HyperNetwork has far fewer parameters than the network for which it produces weights.</p> + </li> + <li> + <p>In the general case, the kernel dimensions (across layers) are not of the same size but are integer multiples of some basic sizes. In that case, the HyperNetwork can generate kernels for the basic size, which can be concatenated to form larger kernels. This would require additional input embeddings but not a change in the architecture of the HyperNetwork.</p> + </li> +</ul>
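+
+<p>A minimal sketch of such a two-layer HyperNetwork (the shapes follow one plausible reading of the construction above; initialization and the exact kernel layout are assumptions):</p>
+
+<pre><code class="language-python">import torch
+import torch.nn as nn
+
+class StaticHyperNetwork(nn.Module):
+    """Maps a layer embedding z_j to a kernel matrix K_j of shape
+    (N_in * f, N_out * f) via N_in slice-wise projections."""
+    def __init__(self, z_dim, d, n_in, n_out, f):
+        super().__init__()
+        self.n_in, self.n_out, self.f = n_in, n_out, f
+        # first layer: one projection per input slice (N_in of them)
+        self.w1 = nn.Parameter(torch.randn(n_in, z_dim, d) * 0.01)
+        self.b1 = nn.Parameter(torch.zeros(n_in, d))
+        # second layer: a single matrix shared across all N_in slices
+        self.w2 = nn.Parameter(torch.randn(d, n_out * f * f) * 0.01)
+        self.b2 = nn.Parameter(torch.zeros(n_out * f * f))
+
+    def forward(self, z):
+        a = (self.w1 * z.view(1, -1, 1)).sum(dim=1) + self.b1  # (N_in, d)
+        slices = a @ self.w2 + self.b2        # (N_in, N_out * f * f)
+        return slices.view(self.n_in * self.f, self.n_out * self.f)
+</code></pre>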
<h3 id="dynamic-hypernetworks---hypernetworks-for-rnns">Dynamic HyperNetworks - HyperNetworks for RNNs</h3> + +<ul> + <li> + <p>HyperRNNs/HyperLSTMs denote HyperNetworks that generate weights for RNNs/LSTMs.</p> + </li> + <li> + <p>HyperRNNs implement a form of relaxed weight sharing - an alternative to the full weight sharing of traditional RNNs.</p> + </li> + <li> + <p>At any timestep $t$, the input to the HyperRNN is the concatenation of $x_{t}$ (the input to the RNN at time $t$) and the hidden state $h_{t-1}$ of the RNN. The output is the weight for the main RNN at timestep $t$.</p> + </li> + <li> + <p>In practice, a <em>weight scaling vector</em> $d$ is used to reduce the memory footprint, which would otherwise be $dim$ times the memory of a standard RNN. $dim$ is the dimensionality of the embedding vector $z_j$.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>HyperNetworks are used to train standard CNNs for MNIST and ResNets for CIFAR 10. In these experiments, HyperNetworks slightly underperform the best performing models but use far fewer parameters.</p> + </li> + <li> + <p>HyperLSTMs trained on the Penn Treebank dataset and the Hutter Prize Wikipedia dataset outperform stacked LSTMs and perform similarly to layer-norm LSTMs. Interestingly, using HyperLSTMs with layer-norm improves performance over HyperLSTMs alone.</p> + </li> + <li> + <p>Given the similar performance of HyperLSTMs and layer-norm LSTMs, the paper conducted an ablation study to understand if HyperLSTMs learned a weight adjustment policy similar to the statistics-based approach used by layer-norm LSTMs.</p> + + <ul> + <li>However, the analysis of the histogram of the hidden states suggests that using layer-norm reduces the saturation effect, while in HyperLSTMs, the cell is saturated most of the time. This indicates that the two models are learning different policies.</li> + </ul> + </li> + <li> + <p>HyperLSTMs are also evaluated for handwriting sequence generation by training on the IAM online handwriting dataset.</p> + + <ul> + <li>While HyperLSTMs are quite effective on this task, combining them with layer-norm degrades the performance.</li> + </ul> + </li> + <li> + <p>On the WMT’14 En-to-Fr machine translation task, HyperLSTMs outperform LSTM-based approaches.</p> + </li> +</ul> + + + + + Energy-based Models for Continual Learning + + 2021-01-18T00:00:00-05:00 + /site/2021/01/18/Energy-based Models for Continual Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes to use Energy-based Models (EBMs) for Continual Learning.</p> + </li> + <li> + <p>In classification tasks, the standard approach uses a cross-entropy objective function along with a normalized probability distribution.</p> + </li> + <li> + <p>However, cross-entropy reduces all negative classes’ likelihood when updating the model for a given sample, potentially leading to catastrophic forgetting.</p> + </li> + <li> + <p>Classification can be seen as learning an EBM across separate classes.</p> + </li> + <li> + <p>During an update, the energy for the pair of a sample and its ground-truth class decreases, while the energy corresponding to the pairs of the sample and the negative classes increases.</p> + </li> + <li> + <p>Unlike the cross-entropy loss, EBMs allow choosing the negative classes to update.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2011.12216">Link to the paper</a></p> + </li> +</ul> + +<h2 id="applications-of-ebms-for-continual-learning">Applications of EBMs for Continual Learning</h2> + +<ul> + <li> + <p>EBMs can be used for class-incremental learning without requiring a replay-buffer or generative model for replay.</p> + </li> + <li> + <p>EBMs can be used for continual learning in setups without task boundaries, i.e., setups where the data distribution can change without a clear separation between tasks.</p> + </li> +</ul> + +<h2 id="ebms">EBMs</h2> + +<ul> + <li> + <p>The Boltzmann distribution is used to define the conditional likelihood of label $y$, given an input $x$.
i.e., $p(y|x) = \frac{\exp(-E(x, y))}{Z(x)}$, where $Z(x) = \sum_{y' \in Y} \exp(-E(x, y'))$. Here, $E$ is the learnt energy function that maps an input-label pair to a scalar energy value.</p> + </li> + <li> + <p>During training, the contrastive divergence loss is used.</p> + </li> + <li> + <p>During inference, the class for which the input-class pair has the least energy is selected as the predicted class.</p> + </li> +</ul> + +<h2 id="ebms-for-continual-learning">EBMs for Continual Learning</h2> + +<h3 id="selection-of-negative-samples">Selection of Negative Samples</h3> + +<ul> + <li> + <p>The paper considers several strategies for the selection of negative samples:</p> + + <ul> + <li> + <p>one negative class per sample. The negative class is sampled from the current batch of data. This selection approach performs best.</p> + </li> + <li> + <p>all the negative classes in a batch are used for creating the negative samples.</p> + </li> + <li> + <p>all the classes seen so far in training are used as the negative samples. This approach works the worst in practice.</p> + </li> + </ul> + </li> + <li> + <p>Given the flexibility of sampling the negative classes, EBMs can be used in boundary-agnostic setups (where the data distribution can change smoothly without an explicit task boundary).</p> + </li> +</ul> + +<h3 id="energy-network">Energy Network</h3> + +<ul> + <li> + <p>EBMs take both the sample and the class as the input. The class can be treated as an attention filter to select the most relevant information between the sample and the class.</p> + </li> + <li> + <p>In theory, EBMs can train for any number of classes without knowing the number of classes beforehand. This is an advantage over softmax-based approaches, where adding new classes requires changing the size of the softmax output layer.</p> + </li> +</ul> + +<h3 id="inference">Inference</h3> + +<ul> + <li>During inference, all the classes seen so far are evaluated via the energy function. The class that corresponds to the least-energy sample-class pair is returned as the selected class.</li> +</ul>
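+
+<p>A schematic sketch of this training/inference loop (<code class="language-plaintext highlighter-rouge">energy_fn</code> is an assumed callable; the softmax-style contrastive loss below is a common stand-in for the paper’s contrastive divergence objective):</p>
+
+<pre><code class="language-python">import torch
+import torch.nn.functional as F
+
+def ebm_loss(energy_fn, x, y_pos, y_neg):
+    """One sampled negative class: lower E(x, y_pos), raise E(x, y_neg)."""
+    e = torch.stack([energy_fn(x, y_pos), energy_fn(x, y_neg)])
+    return -F.log_softmax(-e, dim=0)[0]
+
+def predict(energy_fn, x, classes_seen):
+    """Return the class whose pairing with x has the least energy."""
+    energies = torch.stack([energy_fn(x, y) for y in classes_seen])
+    return classes_seen[int(energies.argmin())]
+</code></pre>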
<h2 id="experiments">Experiments</h2> + +<h3 id="datasets">Datasets</h3> + +<ul> + <li> + <p>Split MNIST</p> + </li> + <li> + <p>Permuted MNIST</p> + </li> + <li> + <p>CIFAR-10</p> + </li> + <li> + <p>CIFAR-100</p> + </li> +</ul> + +<h3 id="results-in-boundary-aware-setting">Results in Boundary-Aware Setting</h3> + +<ul> + <li> + <p>The proposed approach outperforms the standard continual learning approaches that use neither a replay buffer nor a generative model.</p> + </li> + <li> + <p>Additionally, the paper shows that for the same number of parameters, the effective capacity of EBMs is higher than the effective capacity of standard classification models.</p> + </li> + <li> + <p>The paper also shows that standard classification models tend to assign a high probability to new classes for both old and new data. EBMs assign the probability more uniformly (and correctly) across the classes.</p> + </li> + <li> + <p>In an ablation study, the paper shows that both label conditioning and the contrastive divergence loss help in improving the performance of EBMs.</p> + </li> +</ul> + +<h3 id="results-in-boundary-agnostic-setting">Results in Boundary-Agnostic Setting</h3> + +<ul> + <li>The performance gains in the boundary-agnostic setting are even more significant than the improvements in the boundary-aware setting.</li> +</ul> + + + + + GPipe - Easy Scaling with Micro-Batch Pipeline Parallelism + + 2021-01-11T00:00:00-05:00 + /site/2021/01/11/GPipe - Easy Scaling with Micro-Batch Pipeline Parallelism + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper introduces GPipe, a pipeline parallelism library for scaling networks that can be expressed as a sequence of layers.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1811.06965">Link to the paper</a></p> + </li> +</ul> + +<h2 id="design">Design</h2> + +<ul> + <li> + <p>Consider training a deep neural network with <em>L</em> layers using <em>K</em> accelerators (say GPUs).</p> + </li> + <li> + <p>The <em>i<sup>th</sup></em> layer has its <em>forward</em> function <em>f<sub>i</sub></em>, <em>backward</em> function <em>b<sub>i</sub></em>, weights <em>w<sub>i</sub></em> and a cost <em>c<sub>i</sub></em> (say the memory footprint or computational time).</p> + </li> + <li> + <p>GPipe partitions this network into <em>K</em> cells and places the <em>i<sup>th</sup></em> cell on the <em>i<sup>th</sup></em> accelerator. Output from the <em>i<sup>th</sup></em> accelerator is passed to the <em>i+1<sup>th</sup></em> accelerator as input.</p> + </li> + <li> + <p>During the forward pass, the input batch (of size <em>N</em>) is divided into <em>M</em> equal micro-batches. These micro-batches are pipelined through the <em>K</em> accelerators one after another.</p> + </li> + <li> + <p>During the backward pass, gradients are computed for each micro-batch. The gradients are accumulated and applied at the end of each mini-batch (see the sketch below).</p> + </li> + <li> + <p>In batch normalization, the statistics are computed over each micro-batch (used during training) and mini-batch (used during evaluation).</p> + </li> + <li> + <p>Micro-batching improves over the naive model parallelism approach by reducing the underutilization of resources (due to the network’s sequential dependencies).</p> + </li> +</ul> + +<h2 id="performance-optimization">Performance Optimization</h2> + +<ul> + <li> + <p>GPipe supports re-materialization (or checkpointing), i.e., during the forward pass, only the output activations (at partition boundaries) are stored.</p> + </li> + <li> + <p>During the backward pass, the forward function is recomputed at each accelerator. This trades off the memory requirement with increased time.</p> + </li> + <li> + <p>One potential downside is that partitioning can introduce some idle time per accelerator (referred to as the bubble overhead). However, with a sufficiently large number of micro-batches (more than 4 times the number of partitions), the bubble overhead is negligible.</p> + </li> +</ul>
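+
+<p>In spirit, a training step looks like the following sketch (written sequentially for clarity; real GPipe overlaps micro-batches across devices, and <code class="language-plaintext highlighter-rouge">split()</code> is an assumed helper):</p>
+
+<pre><code class="language-python">def train_step(stages, optimizer, batch, loss_fn, num_micro_batches):
+    """One mini-batch update: run micro-batches through the pipeline
+    stages, accumulate gradients, then apply a single optimizer step."""
+    optimizer.zero_grad()
+    for x, y in split(batch, num_micro_batches):  # split() is assumed
+        h = x
+        for stage in stages:  # each stage (cell) lives on its own device
+            h = stage(h)
+        (loss_fn(h, y) / num_micro_batches).backward()  # grads accumulate
+    optimizer.step()  # weights updated once per mini-batch
+</code></pre>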
<h2 id="performance-analysis">Performance Analysis</h2> + +<ul> + <li> + <p>Two different types of model architectures are compared: the AmoebaNet convolutional model and the Transformer sequence-to-sequence model.</p> + </li> + <li> + <p>For AmoebaNet, the size of the largest trainable model (on a single 8GB Cloud TPU v2) increases from 82M to 318M parameters. Further, a 1.8 billion parameter model can be trained on 8 accelerators (a 25x improvement in size using GPipe).</p> + </li> + <li> + <p>For transformers, GPipe scales the model size to 83.9B parameters with 128 partitions (a 298x improvement in size compared to a single accelerator).</p> + </li> + <li> + <p>Since the computation is evenly distributed across transformer layers, the training throughput scales almost linearly with the number of devices.</p> + </li> + <li> + <p>Quantitative experiments on ImageNet and multilingual machine translation show that models can be effectively trained using GPipe.</p> + </li> +</ul> + + + + + Compositional Explanations of Neurons + + 2021-01-04T00:00:00-05:00 + /site/2021/01/04/Compositional Explanations of Neurons + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes a method to explain/interpret the representations learned by individual neurons in deep neural networks.</p> + </li> + <li> + <p>The explanations are generated by searching for logical forms defined by a set of composition operators (like OR, AND, NOT) over primitive concepts (like water).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2006.14032">Link to the paper</a></p> + </li> +</ul> + +<h2 id="generating-compositional-explanations">Generating compositional explanations</h2> + +<ul> + <li> + <p>Given a neural network <em>f</em>, the goal is to explain a neuron’s behavior (of this network) in human-understandable terms.</p> + </li> + <li> + <p><a href="http://netdissect.csail.mit.edu/">Previous work</a> builds on the idea that a good explanation is a description that identifies the inputs for which the neuron activates.</p> + </li> + <li> + <p>Given a set of pre-defined atomic concepts $c \in C$ and a similarity measure $\delta(n, c)$, where $n$ represents the activation of the $n^{th}$ neuron, the explanation for the $n^{th}$ neuron is the concept most similar to $n$.</p> + </li> + <li> + <p>For images, a concept could be represented as an image segmentation map. For example, the water concept can be represented by the segments of the images that show water.</p> + </li> + <li> + <p>The similarity can be measured by first thresholding the neuron activations (to get a neuron mask) and then computing the IoU score (or Jaccard Similarity) between the neuron mask and the concept (see the sketch below).</p> + </li> + <li> + <p>One limitation of this approach is that the explanations are restricted to pre-defined concepts.</p> + </li> + <li> + <p>The paper expands the set of candidate concepts by considering the logical forms of the atomic concepts.</p> + </li> + <li> + <p>In theory, the search space would explode exponentially. In practice, it is restricted to explanations with at most $N$ atomic concepts, and beam search is performed (instead of exhaustive search).</p> + </li> +</ul>
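+
+<p>A small sketch of the IoU computation and mask composition (binary numpy masks are assumed):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def iou(neuron_acts, concept_mask, threshold):
+    """IoU between the thresholded neuron mask and a binary concept mask."""
+    neuron_mask = neuron_acts &gt; threshold
+    inter = np.logical_and(neuron_mask, concept_mask).sum()
+    union = np.logical_or(neuron_mask, concept_mask).sum()
+    return inter / union if union else 0.0
+
+# logical forms are themselves masks, e.g. (water OR river) AND NOT blue:
+def water_or_river_not_blue(water, river, blue):
+    return np.logical_and(np.logical_or(water, river), np.logical_not(blue))
+</code></pre>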
<h2 id="setup">Setup</h2> + +<ul> + <li> + <p><strong>Image Classification Setup</strong></p> + + <ul> + <li> + <p>Neurons from the final 512-unit convolutional layer of a ResNet-18 trained on the <a href="https://ieeexplore.ieee.org/abstract/document/7968387">Places365 dataset</a>.</p> + </li> + <li> + <p>Probing for concepts from the <a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Zhou_Scene_Parsing_Through_CVPR_2017_paper.html">ADE20k scenes dataset</a> with atomic concepts defined by annotations in the <a href="http://netdissect.csail.mit.edu/">Broden dataset</a>.</p> + </li> + </ul> + </li> + <li> + <p><strong>NLI Setup</strong></p> + + <ul> + <li> + <p>A BiLSTM baseline followed by MLP layers trained on the <a href="https://nlp.stanford.edu/projects/snli/">Stanford Natural Language Inference (SNLI) corpus</a>.</p> + </li> + <li> + <p>Probing the penultimate hidden layer (of the MLP component) for sentence-level explanations.</p> + </li> + <li> + <p>Concepts are created using the 2000 most common words in the validation split of the SNLI dataset.</p> + </li> + <li> + <p>Additional concepts are created based on the lexical overlap between the premise and the hypothesis.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="do-neurons-learn-compositional-concepts">Do neurons learn compositional concepts?</h2> + +<ul> + <li> + <p><strong>Image Classification Setup</strong></p> + + <ul> + <li> + <p>As $N$ increases, the mean IoU increases (i.e., the explanation quality increases), though the returns become diminishing beyond $N=10$.</p> + </li> + <li> + <p>Manual inspection of 128 neurons and their length-10 explanations shows that 69% of the neurons learned some meaningful combination of concepts, while 31% learned some unrelated concepts.</p> + </li> + <li> + <p>The meaningful combinations of concepts include:</p> + + <ul> + <li> + <p>perceptual abstraction that is also lexically coherent (e.g., “skyscraper OR lighthouse OR water tower”).</p> + </li> + <li> + <p>perceptual abstraction that is not lexically coherent (e.g., “cradle OR autobus OR fire escape”).</p> + </li> + <li> + <p>specialized abstraction of the form L1 AND NOT L2 (e.g., (water OR river) AND NOT blue).</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p><strong>NLI Setup</strong></p> + + <ul> + <li> + <p>As $N$ increases, the mean IoU increases (as in the image classification setup), though the IoU keeps increasing past $N=30$.</p> + </li> + <li> + <p>Many neurons correspond to lexical features. For example, some neurons are gender-sensitive or activate for verbs like sitting, eating or sleeping. Some neurons are activated when the lexical overlap between the premise and the hypothesis is high.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="do-interpretable-neurons-contribute-to-model-accuracy">Do interpretable neurons contribute to model accuracy?</h2> + +<ul> + <li> + <p>In the image classification setup, the more interpretable the neuron is, the more accurate the model is (when the neuron is active).</p> + </li> + <li> + <p>However, the opposite trend is seen in NLI models, i.e., the more interpretable neurons are less accurate.</p> + </li> + <li> + <p>Key takeaway - interpretability (as measured by the paper) is not correlated with performance.
Given a concept space, the identified behaviors may be correlated or anti-correlated with the model’s performance.</p> + </li> +</ul> + +<h2 id="targeting-explanations-to-change-model-behavior">Targeting explanations to change model behavior</h2> + +<ul> + <li> + <p>The idea is to construct examples that activate (or inhibit) certain neurons, causing a change in the model’s predictions.</p> + </li> + <li> + <p>These adversarial examples are referred to as “copy-paste” adversarial examples.</p> + </li> + <li> + <p>For example, the neuron corresponding to “(water OR river) AND (NOT blue)” is a major contributor to detecting the “swimming hole” class. An adversarial example is created by making the water blue. This prompts the model to predict “grotto” instead of “swimming hole.”</p> + </li> + <li> + <p>Similarly, in the NLI model, a neuron detects the word “nobody” in the hypothesis as highly indicative of contradiction. An adversarial example can be created by adding the word “nobody” to the hypothesis, prompting the model to predict contradiction while the true label should be neutral.</p> + </li> + <li> + <p>These observations support the hypothesis that one can use explanations to create adversarial examples.</p> + </li> +</ul> + + + + + Design patterns for container-based distributed systems + + 2020-12-21T00:00:00-05:00 + /site/2020/12/21/Design patterns for container-based distributed systems + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper describes three types of design patterns for container-based distributed systems: single-container management patterns, single-node multi-container patterns, and multi-node patterns.</li> + <li><a href="https://www.usenix.org/conference/hotcloud16/workshop-program/presentation/burns">Link to the paper</a></li> +</ul> + +<h2 id="single-container-management-patterns">Single-container management patterns</h2> + +<ul> + <li>Traditionally, containers have exposed three functions: run, pause and stop.</li> + <li>A richer API can be implemented to provide fine-grained control to system developers and operators.</li> + <li>Similarly, much more application information (including monitoring metrics) can be exposed.</li> + <li>The container interface can be used to define a contract for a complex lifecycle.
For example, instead of arbitrarily shutting down the container, the system could signal that the container will be terminated, giving it some time to perform cleanup/follow-up actions.</li> +</ul> + +<h2 id="single-node-multi-container-pattern">Single-node, multi-container pattern</h2> + +<h3 id="sidecar-pattern">Sidecar pattern</h3> + +<ul> + <li>Multiple containers extend and enhance the main container.</li> + <li>For example, a web server serves from the local disk (main container) while a sidecar container updates the data.</li> + <li>Benefits: + <ul> + <li>independent development, deployment, and scaling of containers</li> + <li>possibility of combining different types of containers</li> + <li>failure containment boundary, i.e., one failing container need not bring down the entire system.</li> + </ul> + </li> +</ul> + +<h3 id="ambassador-pattern">Ambassador pattern</h3> + +<ul> + <li>Proxies communication to and from the main container, with the ambassador hiding the complexities of communicating with a distributed (multi-shard) system that may be written in a different language.</li> +</ul> + +<h3 id="adapter-pattern">Adapter pattern</h3> + +<ul> + <li>Standardizes output and interfaces across the containers to provide a simple, homogenized view to external applications.</li> + <li>A common example is using a single tool for collecting/processing metrics from multiple applications.</li> + <li>This is different from the ambassador pattern, which aims to provide a simplified view of the external world to an application.</li> +</ul> + +<h2 id="multi-node-application-patterns">Multi-node application patterns</h2> + +<h3 id="leader-election-pattern">Leader election pattern</h3> + +<ul> + <li>In a sharded (or replication-based) system, the system may have to elect a leader (or multiple leaders) among the replicas (or shards).</li> + <li>Instead of using a leader election library, a leader election container can be used (that communicates with other containers over, say, HTTP).
This removes the restriction of using a leader election library compatible with the containers (e.g., using the same language).</li> +</ul> + +<h3 id="work-queue-pattern">Work queue pattern</h3> + +<ul> + <li>A work-coordinator container can dispatch work to different containers, each of which may have a different implementation or dependencies, thus removing the restriction that all the workers use the same runtime.</li> +</ul> + +<h3 id="scattergather-pattern">Scatter/gather pattern</h3> + +<ul> + <li>An external client sends a request to a root container.</li> + <li>This container fans out the request to many containers that may perform the computation in parallel.</li> + <li>The root container gathers these parallel computations’ results and aggregates them into a response to the external client.</li> +</ul> + + + + + Cassandra - a decentralized structured storage system + + 2020-12-14T00:00:00-05:00 + /site/2020/12/14/Cassandra - a decentralized structured storage system + <h2 id="introduction">Introduction</h2> + +<ul> + <li>Cassandra is a distributed storage system that runs over cheap commodity servers and handles high write throughput while maintaining low latency for read operations.</li> + <li>At the time of writing, it was used to support the search for Facebook Inbox.</li> + <li><a href="https://dl.acm.org/doi/10.1145/1773912.1773922">Link to the paper</a></li> + <li><a href="https://cassandra.apache.org/">Link to the implementation</a></li> +</ul> + +<h2 id="data-model">Data Model</h2> + +<ul> + <li>A table is a distributed multidimensional map.</li> + <li>The key is a string (generally 16-36 bytes long), while the value is a structured object.</li> + <li>Every operation under a single row key is atomic per replica.</li> + <li>Columns are grouped together into sets called column families.</li> + <li>There are two types of column families: + <ul> + <li>Simple families.</li> + <li>Super column families: visualized as a column family within a column family.</li> + </ul> + </li> + <li>Columns can be sorted by name or time (used to display results in time-sorted order).</li> + <li>The API supports insert, get and delete operations.</li> +</ul> + +<h2 id="system-architecture">System Architecture</h2> + +<h3 id="handling-requests">Handling Requests</h3> + +<ul> + <li>Any read/write request gets routed to any node in the cluster. The node determines the replicas for a given key and routes the request (see the sketch below).</li>
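+ <li>A toy sketch of this routing plus a quorum write (<code class="language-plaintext highlighter-rouge">md5</code> stands in for the hash even though Cassandra’s partitioner here is order-preserving; <code class="language-plaintext highlighter-rouge">send</code> is an assumed RPC helper):
+
+ <pre><code class="language-python">import bisect, hashlib
+
+def position(key):  # stand-in hash; Cassandra's is order-preserving
+    return int(hashlib.md5(key.encode()).hexdigest(), 16)
+
+def replicas_for(key, ring, node_at, n=3):
+    """ring: sorted token positions; node_at: token position to node.
+    Walk clockwise from the key's position to pick N replicas."""
+    i = bisect.bisect(ring, position(key)) % len(ring)
+    return [node_at[ring[(i + k) % len(ring)]] for k in range(n)]
+
+def write(key, value, ring, node_at, send, quorum=2):
+    """Route the write to all replicas; succeed on a quorum of acks."""
+    acks = sum(1 for node in replicas_for(key, ring, node_at)
+               if send(node, key, value))
+    return acks &gt;= quorum
+</code></pre>
+ </li>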
<li>For a write query, the system waits for a quorum of replicas to acknowledge the write’s completion.</li> + <li>For a read query, the system either routes the request to the closest replica (might fetch stale results) or routes the request to all replicas and waits for a quorum of responses.</li> +</ul> + +<h3 id="partitioning">Partitioning</h3> + +<ul> + <li>Cassandra partitions data across the cluster using consistent hashing with an order-preserving hash function.</li> + <li>The hash function’s output range is treated as a fixed circular ring, and each node is assigned a random position on the ring.</li> + <li>An incoming request specifies a key used to route requests.</li> + <li>One benefit of this approach is that the addition/removal of a node only affects its immediate neighbors.</li> + <li>However, randomly assigning nodes leads to non-uniform data and load distribution.</li> + <li>Cassandra uses the load information and moves lightly loaded nodes on the ring to reduce the load on heavily loaded nodes.</li> +</ul> + +<h3 id="replication">Replication</h3> + +<ul> + <li>Each data item is replicated at N hosts, where N is the per-instance replication factor.</li> + <li>Cassandra supports the following replication policies: Rack Unaware, Rack Aware (within a datacenter), and Datacenter Aware.</li> + <li>For the “Rack Aware” and “Datacenter Aware” strategies, Zookeeper elects a leader among the nodes, which holds metadata about which range a node is responsible for.</li> + <li>In case of node failures and network partitions, the quorum requirements are relaxed.</li> +</ul> + +<h3 id="membership">Membership</h3> + +<ul> + <li>Cluster membership is based on Scuttlebutt, a very efficient anti-entropy Gossip-based mechanism.</li> + <li>Cassandra uses a modified version of the $\phi$ Accrual Failure Detector for detecting failures, which provides the suspicion level (of failure) for each node.</li> +</ul> + +<h3 id="bootstrapping">Bootstrapping</h3> + +<ul> + <li>A node, starting for the first time, chooses a random position on the ring.</li> + <li>This information is persisted on the local disk, on Zookeeper, and gossiped around the cluster (so any node can route any query in the cluster).</li> + <li>During bootstrapping, the newly joined node reads a list of contact points (within the cluster) from a configuration file.</li> +</ul> + +<h3 id="local-persistence">Local Persistence</h3> + +<ul> + <li>Generally, a write operation involves a write into a commit log (for durability and recoverability), followed by a write into the in-memory data structures.</li> + <li>A read operation starts with querying the in-memory data and then looks into the filesystem.</li> + <li>Read queries on the filesystem use bloom filters.</li> + <li>Column indices are maintained to make it faster to look up relevant columns.</li> +</ul> + +<h2 id="implementation-details">Implementation Details</h2> + +<ul> + <li>Components are implemented in Java.</li> + <li>System control messages use UDP, while messages for replication and request routing use TCP.</li> + <li>A new commit log is rolled out after the older one exceeds 128MB in size.</li> + <li>All the data is indexed using a primary key.</li> + <li>Data on the disk is chunked into sequences of blocks.
Each block contains at most 128 keys and is demarcated by a block index.</li> + <li>When the data is written to the disk, a block index is generated and maintained in memory for faster access.</li> + <li>A compaction process is performed to merge multiple files (on disk) into one file.</li> +</ul> + +<h2 id="practical-experience">Practical Experience</h2> + +<ul> + <li>Data from MySQL servers is added to Cassandra using MapReduce processes.</li> + <li>Although Cassandra is a completely decentralized system, adding some coordination (via Zookeeper) is helpful.</li> + <li>For Inbox Search, a per-user index is maintained for all the messages.</li> + <li>For “term search”, the key is the userid, and the words in the message become the super column.</li> + <li>For searching all the messages ever sent/received by a user, the key is the userid, and the recipient ids are the super columns.</li> +</ul> + + + + + CAP twelve years later - How the rules have changed + + 2020-12-07T00:00:00-05:00 + /site/2020/12/07/CAP twelve years later - How the rules have changed + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The CAP theorem states that any system sharing data over the network can have at most two (out of three) desirable properties:</p> + + <ul> + <li> + <p>consistency (C), i.e., a single, up-to-date copy of the data;</p> + </li> + <li> + <p>high availability (A) of that data (for updates); and</p> + </li> + <li> + <p>tolerance to network partitions (P).</p> + </li> + </ul> + </li> + <li> + <p>This “2 of 3” formulation is misleading as it oversimplifies the interplay between the properties.</p> + </li> + <li> + <p><a href="https://ieeexplore.ieee.org/abstract/document/6133253">Link to the paper</a></p> + </li> +</ul> + +<h2 id="acid-vs-base">ACID vs. BASE</h2> + +<ul> + <li> + <p>ACID is a design philosophy that focuses on consistency, as reflected in the traditional relational databases.</p> + </li> + <li> + <p>The four properties in ACID are:</p> + + <ul> + <li> + <p>Atomicity (A), i.e., the operations are atomic, and either the entire operation succeeds or none of it succeeds.</p> + </li> + <li> + <p>Consistency (C), i.e., a transaction preserves all the rules.
Note that the consistency in CAP is a subset of the consistency in ACID.</p> + </li> + <li> + <p>Isolation (I), i.e., transactions occur in isolation and do not affect each other.</p> + </li> + <li> + <p>Durability (D), i.e., the transactions are durable irrespective of system failure.</p> + </li> + </ul> + </li> + <li> + <p>BASE is an alternate design philosophy that focuses on availability, as reflected in the NoSQL databases.</p> + </li> + <li> + <p>The three properties in BASE are:</p> + + <ul> + <li> + <p>Basic Availability (BA), i.e., the database appears to work most of the time.</p> + </li> + <li> + <p>Soft state (S), i.e., the system’s state can change over time as it becomes eventually consistent.</p> + </li> + <li> + <p>Eventual consistency (E), i.e., the system will eventually become consistent over time.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="cap-confusion">CAP confusion</h2> + +<ul> + <li> + <p>Generally, partitionability is seen as a must-have, thus reducing the choice to be between availability and consistency.</p> + </li> + <li> + <p>This view is somewhat misleading because the choice between C, A, and P is not binary but granular.</p> + </li> + <li> + <p>The choice between C and A can occur at various granularity levels, and different components (of a larger system) can prioritize different aspects.</p> + </li> + <li> + <p>Similarly, the CAP theorem generally ignores latency even though it is closely related to partitionability. For example, failing to achieve consistency within a time-bound (i.e., latency) implies a partition.</p> + </li> + <li> + <p>In general, there is no global notion of partition - some subset of nodes may experience a partition, and others may not.</p> + </li> + <li> + <p>Once a partition is detected, the system can then choose between C and A.</p> + </li> +</ul> + +<h2 id="managing-partitions">Managing Partitions</h2> + +<ul> + <li> + <p>Three-step process for managing partitions:</p> + + <ul> + <li> + <p>Detect the start of a partition.</p> + </li> + <li> + <p>Enter an explicit partition mode that may limit some operations.</p> + + <ul> + <li> + <p>Possible strategies:</p> + + <ul> + <li> + <p>Reduce availability by limiting some operations.</p> + </li> + <li> + <p>Record extra information that can be used during partition recovery.</p> + </li> + </ul> + </li> + <li> + <p>The strategy depends on the invariants that the system should maintain.</p> + </li> + <li> + <p>For example, if the invariant is that the keys (in a table) should be unique, the system could allow duplicate keys for some time and perform a de-duplication step during partition recovery.</p> + </li> + <li> + <p>A counterexample is a monetary transaction (e.g., charging a credit card). In such cases, the system could disable the operation and record it for performing later. Sometimes this “unavailability” is not visible to the user.</p> + </li> + <li> + <p>History of operations (over replicas across different partitions) can be tracked using version vectors of the form (node, logical time); a small sketch follows this list. The system can easily recreate the order in which the operations were executed (or mark them as being concurrent).</p> + </li> + </ul> + </li> + <li> + <p>Initiate partition recovery when communication is restored and make the state across the partitions consistent.</p> + </li> + <li> + <p>One common approach is to revert to the state when the partition was detected and apply the operations consistently across all the replicas.</p> + </li> + <li> + <p>This may require some extra effort to merge conflicts.</p> + </li> + <li> + <p>One workaround can be to constrain the use of certain operations so that the system does not encounter merge conflicts during recovery.</p> + </li> + <li> + <p>Sometimes, certain invariants may be violated when the system is in the partition mode and need to be fixed during recovery.</p> + </li> + <li> + <p>The key takeaway is that when partitions exist, the choice between availability and consistency is not binary, and both can be optimized for.</p> + </li> + </ul> + </li> +</ul>
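+
+<p>A minimal sketch of the version-vector bookkeeping mentioned above (illustrative only; the function names are made up): each node keeps a per-node logical clock, and comparing two vectors tells us whether one history subsumes the other or whether they are concurrent and need a merge during partition recovery.</p>
+
+<pre><code class="language-python">def bump(vv, node):
+    """Increment a node's logical time in a version vector (a dict of node to count)."""
+    vv = dict(vv)
+    vv[node] = vv.get(node, 0) + 1
+    return vv
+
+def dominates(a, b):
+    """True if version vector `a` has seen everything `b` has."""
+    return all(a.get(node, 0) &gt;= count for node, count in b.items())
+
+def relate(a, b):
+    """Classify two histories: ordered one way, the other way, equal, or concurrent."""
+    if dominates(a, b) and dominates(b, a):
+        return "equal"
+    if dominates(a, b):
+        return "b happened before a"
+    if dominates(b, a):
+        return "a happened before b"
+    return "concurrent (needs merge during partition recovery)"
+
+# Two sides of a partition each accept a write for the same key.
+base = bump({}, "node1")          # {'node1': 1}
+left = bump(base, "node1")        # update applied on one side
+right = bump(base, "node2")       # update applied on the other side
+print(relate(left, right))        # concurrent (needs merge during partition recovery)
+</code></pre>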
 + + + + Consistency Tradeoffs in Modern Distributed Database System Design + + 2020-11-30T00:00:00-05:00 + /site/2020/11/30/Consistency Tradeoffs in Modern Distributed Database System Design + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The CAP theorem has been influential in the design decisions for distributed databases.</p> + </li> + <li> + <p>However, designers incorrectly assume that the CAP theorem “always” imposes restrictions in terms of the tradeoff between availability and consistency. In contrast, the tradeoff is applicable only in the case of partitions.</p> + </li> + <li> + <p>The CAP theorem led to the development of highly available systems with reduced consistency models (and reduced ACID guarantees).</p> + </li> + <li> + <p>Another tradeoff - between latency and consistency - has also been influential for database design.</p> + </li> + <li> + <p>The paper unifies the CAP and latency-consistency tradeoffs into a single formulation called PACELC.</p> + </li> + <li> + <p>Note that some of the observations, especially ones about the databases, may be outdated now (the paper was written in 2012). However, the core message is still relevant.</p> + </li> + <li> + <p><a href="https://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf">Link to the paper</a></p> + </li> +</ul> + +<h2 id="latency-consistency-tradeoff">Latency-Consistency Tradeoff</h2> + +<ul> + <li> + <p>Low latency (or high availability) means that the system must replicate data.</p> + </li> + <li> + <p>In case of an update query, three possibilities arise:</p> + + <ul> + <li> + <p>The system can choose to send data updates to all the replicas at once. This leads to two possibilities:</p> + + <ul> + <li> + <p>A replica can receive the update queries in an arbitrary order, thus breaking consistency with other replicas.</p> + </li> + <li> + <p>Alternatively, the replicas could use some protocol to agree on the order of updates. However, this can introduce latency.</p> + </li> + </ul> + </li> + <li> + <p>The update queries can be first sent to a master replica.</p> + + <ul> + <li> + <p>The master replica can apply the updates and send them to the other replicas using one of the following strategies:</p> + + <ul> + <li> + <p>Synchronous replication, where the master waits for the updates to be applied to the replica(s). However, this approach introduces latency.</p> + </li> + <li> + <p>Asynchronous replication, where the master treats the update as complete before the replicas have applied it. In this case, the latency-consistency tradeoff depends on how read queries are handled:</p> + + <ul> + <li> + <p>The system can send all read queries to the master. In this case, there are no consistency issues, but additional latency is introduced because all the read queries go to the same replica, thus potentially overloading it.</p> + </li> + <li> + <p>Alternatively, the read query can be served from any replica. While this improves read latency, the results can be inconsistent now.</p> + </li> + </ul> + </li> + <li> + <p>Use a mix of synchronous and asynchronous replication - i.e., some of the write queries are synchronous, and others are asynchronous. In this case, the latency-consistency tradeoff depends on how read queries are handled:</p> + + <ul> + <li> + <p>If the read is routed to at least one replica that has been synchronously updated, consistency can be preserved, with additional latency for discovering the updated replica, etc.</p> + </li> + <li> + <p>If the read query cannot be routed to an updated replica (maybe because none of the replicas is updated), then either latency suffers or an inconsistent read is performed.</p> + </li> + </ul> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>The update query is first sent to an arbitrary replica.</p> + + <ul> + <li>This is the same as the previous case, with the query going to an arbitrary replica instead of the master replica, and suffers from the same latency issues as the last case.</li> + </ul> + </li> + </ul> + </li> + <li> + <p>In a nutshell, the tradeoff between latency and consistency is always present, irrespective of network failure.</p> + </li> + <li> + <p>This contrasts with the CAP theorem, which imposes the tradeoff between availability and consistency only in the case of a network partition.</p> + </li> +</ul>
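+
+<p>The following toy sketch (hypothetical; not from the paper) makes the tradeoff concrete: a synchronous master pays for pushing the update to every replica before acknowledging, while an asynchronous master acknowledges immediately and lets replicas lag, so a read served by a lagging replica can be stale.</p>
+
+<pre><code class="language-python">class Replica:
+    def __init__(self):
+        self.data = {}
+
+    def apply(self, key, value):
+        self.data[key] = value
+
+class Master:
+    def __init__(self, replicas, synchronous=True):
+        self.replicas = replicas
+        self.synchronous = synchronous
+        self.data = {}
+        self.lag = []                    # updates not yet pushed to replicas
+
+    def write(self, key, value):
+        self.data[key] = value
+        if self.synchronous:
+            # Latency cost: wait for every replica before acknowledging.
+            for replica in self.replicas:
+                replica.apply(key, value)
+        else:
+            # Acknowledge immediately; replicas catch up later.
+            self.lag.append((key, value))
+
+# Asynchronous mode: a read routed to a replica may be stale.
+replica = Replica()
+master = Master([replica], synchronous=False)
+master.write("x", 1)
+print(master.data.get("x"), replica.data.get("x"))   # 1 None (inconsistent read)
+</code></pre>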
<h2 id="pacelc">PACELC</h2> + +<ul> + <li> + <p>If there is a partition (P), how does the system trade off availability (A) and consistency (C); else (E), when the system is running without failures, how does the system trade off latency (L) and consistency (C)?</p> + </li> + <li> + <p>The latency-consistency tradeoff (ELC) is relevant only when the data is replicated.</p> + </li> + <li> + <p>Default versions of Dynamo, Cassandra, and Riak were PA/EL systems, i.e., if a partition occurs, availability is prioritized. In the absence of partition, lower latency is prioritized.</p> + </li> + <li> + <p>Fully ACID systems (VoltDB, H-Store, and Megastore) and others like BigTable and HBase are PC/EC, i.e., they prioritize consistency and give up availability and latency.</p> + </li> + <li> + <p>MongoDB can be classified as a PA/EC system, while PNUTS is a PC/EL system.</p> + </li> +</ul> + + + + + Exploring Simple Siamese Representation Learning + + 2020-11-23T00:00:00-05:00 + /site/2020/11/23/Exploring Simple Siamese Representation Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper shows that Siamese networks can be used for unsupervised learning with images without needing techniques like negative sample pairs, large batch training, or momentum encoders.
The training mechanism is referred to as the SimSiam method.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2011.10566">Link to the paper</a></p> + </li> +</ul> + +<h2 id="method">Method</h2> + +<ul> + <li> + <p>Given an input image <em>x</em>, create two augmented views <em>x1</em> and <em>x2</em>.</p> + </li> + <li> + <p>These views are processed by an encoder network <em>f</em>.</p> + </li> + <li> + <p>One of the views (say <em>x1</em>) is processed by the encoder <em>f</em> as well as a predictor MLP <em>h</em> to obtain <em>p1</em>, i.e., <em>p1 = h(f(x1))</em>.</p> + </li> + <li> + <p>The second view (<em>x2</em>) is processed only by the encoder <em>f</em> to obtain an encoding <em>z2</em>, i.e., <em>z2 = f(x2)</em>.</p> + </li> + <li> + <p>Negative cosine similarity is minimized between <em>p1</em> and <em>z2</em>, with the catch that the resulting gradients are not used to update the encoder via <em>z2</em>, i.e., Loss = <em>D(p1, stopgrad(z2))</em>, where <em>D</em> is the negative cosine similarity and <em>stopgrad</em> is an operation that stops the flow of gradients.</p> + </li> + <li> + <p>In practice, both the <em>p1, z2</em> and <em>p2, z1</em> pairs are used for computing the loss, i.e., Loss = <em>0.5 * (D(p1, stopgrad(z2)) + D(p2, stopgrad(z1)))</em>.</p> + </li> +</ul>
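+
+<p>The loss is compact enough to sketch in PyTorch-style code (freely adapted from the paper’s pseudocode; <em>f</em> and <em>h</em> are assumed to be the encoder and predictor modules): <code>detach()</code> plays the role of <em>stopgrad</em>.</p>
+
+<pre><code class="language-python">import torch
+import torch.nn.functional as F
+
+def D(p, z):
+    # Negative cosine similarity; detach() stops gradients from flowing into z.
+    z = z.detach()
+    return -F.cosine_similarity(p, z, dim=1).mean()
+
+def simsiam_loss(x1, x2, f, h):
+    """f: encoder network, h: predictor MLP; x1, x2: two augmented views."""
+    z1, z2 = f(x1), f(x2)
+    p1, p2 = h(z1), h(z2)
+    # Symmetrized loss with stop-gradient on the encoded branch.
+    return 0.5 * (D(p1, z2) + D(p2, z1))
+</code></pre>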
<h2 id="implementation-details">Implementation Details</h2> + +<ul> + <li> + <p>The encoder uses batch norm in all the layers (including the output), while the predictor MLP <em>h</em> uses batch norm only in the hidden layers.</p> + </li> + <li> + <p>SGD optimizer with learning rate <em>0.05 * batchsize / 256</em>, a cosine learning rate decay schedule, and SGD momentum = 0.9.</p> + </li> + <li> + <p>Unsupervised pretraining on the ImageNet dataset is followed by training a supervised linear classifier on the frozen representations.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>The stop-gradient operation is necessary to avoid a degenerate solution. Without stop-gradient, the model maps all inputs to a constant <em>z</em>.</p> + </li> + <li> + <p>If the predictor MLP is removed, the method does not work (because of the loss’s symmetric nature). If the loss is also made asymmetric, the method still does not work without the predictor. However, asymmetric loss + predictor works.</p> + </li> + <li> + <p>Keeping the predictor fixed (i.e., not updating it during training) avoids collapse but leads to poor validation performance.</p> + </li> + <li> + <p>Training the predictor with a constant learning rate works better in practice, likely because the predictor needs to keep adapting before the encoder is sufficiently trained.</p> + </li> + <li> + <p>The method works well across different batch sizes.</p> + </li> + <li> + <p>Removing batch norm layers from all the layers in all the networks does not lead to collapse, though the model’s performance degrades on the validation dataset. Adding batch norm to the hidden layers alone is sufficient.</p> + </li> + <li> + <p>Adding batch norm to the encoder’s output further improves the performance, but adding batch norm to all the layers of all the networks makes the training unstable, with the loss oscillating.</p> + </li> + <li> + <p>Overall, while batch norm helps to improve performance, it is not sufficient to avoid collapse.</p> + </li> + <li> + <p>The setup does not collapse when the cross-entropy loss replaces the cosine loss.</p> + </li> +</ul> + +<h2 id="what-is-simsiam-solving">What is SimSiam solving?</h2> + +<ul> + <li> + <p>Given that the stop-gradient operation seems to be the critical ingredient for avoiding collapse, the paper hypothesizes that SimSiam is solving a different optimization problem.</p> + </li> + <li> + <p>The hypothesis is that SimSiam is implementing an Expectation-Maximisation (EM) algorithm with two sets of variables and two underlying sub-problems.</p> + </li> + <li> + <p>The paper performs several experiments to test this hypothesis. For example, they consider <em>k</em> SGD steps for the first problem before performing an update for the second problem, showing that the alternating optimization is a valid formulation, of which SimSiam is a particular case.</p> + </li> +</ul> + +<h2 id="comparison-to-other-methods">Comparison to other methods</h2> + +<ul> + <li> + <p>SimSiam achieves the highest accuracy among SimCLR, MoCo, BYOL, and SwAV when training for under 100 epochs. However, it lags behind the other methods when trained longer.</p> + </li> + <li> + <p>SimSiam’s representations are transferable beyond the ImageNet tasks.</p> + </li> + <li> + <p>Adding the predictor MLP and stop-gradient operator to SimCLR does not improve its performance.</p> + </li> +</ul> + + + + + Data Management for Internet-Scale Single-Sign-On + + 2020-11-16T00:00:00-05:00 + /site/2020/11/16/Data Management for Internet-Scale Single-Sign-On + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes the architecture of an erstwhile single-sign-on (SSO) service used by Google, called Google Accounts (2006).</p> + </li> + <li> + <p>Note that some of the metrics and design decisions may be outdated now (the paper was written in 2006).
However, the core message is still relevant.</p> + </li> + <li> + <p><a href="https://www.usenix.org/legacy/event/worlds06/tech/prelim_papers/perl/perl.pdf">Link to the paper</a></p> + </li> +</ul> + +<h2 id="operational-constraints">Operational Constraints</h2> + +<ul> + <li> + <p>SSO’s availability affects the availability of all applications that require user sign-in.</p> + </li> + <li> + <p>Generally, systems can achieve high availability by sacrificing consistency, but given the nature of SSO (matching username/passwords), providing an inconsistent view is not a good option, and single-copy consistency is a usability requirement.</p> + </li> +</ul> + +<h2 id="berkeley-db">Berkeley DB</h2> + +<ul> + <li> + <p>Berkeley DB is an embedded, high-performance, scalable, transactional storage system for key-value data and provides both keyed and sequential lookup.</p> + </li> + <li> + <p>It provides a primary copy replication model with a single writer (called master) and multiple read-only replicas.</p> + </li> + <li> + <p>All writes are sent to the master, which first applies the changes and then propagates them to the replicas.</p> + </li> + <li> + <p>The master and the replicas have identical logs, and in case of master failure, a new master is elected from the replicas.</p> + </li> + <li> + <p>Some synchronization may be needed between the replicas in case, e.g., the master dies in between a transaction.</p> + </li> +</ul> + +<h2 id="sso-architecture">SSO Architecture</h2> + +<ul> + <li> + <p>The SSO service maps usernames to user account data and services to service-specific data.</p> + </li> + <li> + <p>The SSO database is partitioned into shards, where each shard is a replicated Berkeley DB (having 5 to 15 replicas).</p> + </li> + <li> + <p>Each replica stores the data in a B+-link tree data structure.</p> + </li> + <li> + <p>Consistent reads must go to the master, while non-master replicas can serve “stale” reads.</p> + </li> + <li> + <p>In the case of larger replication groups (say 15 replicas), only a subset of replicas can become master (“electable replicas”).</p> + </li> + <li> + <p>In general, replicas are spread geographically to handle machine failures, network failures, and data center failures.</p> + </li> + <li> + <p>Replicas in a shard are kept close to reduce the communication latency, which affects the time to commit a write operation or to elect a new master.</p> + </li> + <li> + <p>Some of the shards implement the ID-map, i.e., a map of username to userid and userid to shards.</p> + </li> +</ul> + +<h2 id="database-integration">Database Integration</h2> + +<ul> + <li>Berkeley DB leaves decisions regarding quorums, leases, etc., up to the application.</li> +</ul> + +<h3 id="quorums">Quorums</h3> + +<ul> + <li> + <p>SSO chooses a quorum protocol that guarantees that updates are never lost.</p> + </li> + <li> + <p>For the write queries, the master waits for a positive acknowledgment from a majority of the replicas, including itself, before marking the query as completed.</p> + </li> + <li> + <p>When selecting a new leader, SSO requires a majority of replicas to agree. Moreover, Berkeley DB elections always choose a replica with the latest log entry during an election, thus guaranteeing that the new master’s log will include all the previous master’s updates.</p> + </li> +</ul>
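+
+<p>A stripped-down sketch of this majority-quorum discipline (illustrative; not Berkeley DB’s API): a write commits only once a majority of replicas acknowledge it, and an election picks the replica with the longest log from a reachable majority, so every committed write survives a master failure.</p>
+
+<pre><code class="language-python">class QuorumGroup:
+    def __init__(self, num_replicas):
+        self.logs = [[] for _ in range(num_replicas)]   # one log per replica
+        self.majority = num_replicas // 2 + 1
+
+    def write(self, entry, reachable):
+        """Commit only if a majority of replicas (given by index) acknowledge."""
+        acks = 0
+        for i in reachable:
+            self.logs[i].append(entry)
+            acks += 1
+        return acks &gt;= self.majority       # True means the write is durable
+
+    def elect(self, reachable):
+        """New master: among a reachable majority, pick the longest (latest) log."""
+        if len(reachable) &lt; self.majority:
+            return None                     # cannot elect without a majority
+        return max(reachable, key=lambda i: len(self.logs[i]))
+
+group = QuorumGroup(5)
+print(group.write("x=1", reachable=[0, 1, 2]))   # True: 3 of 5 replicas acked
+print(group.elect(reachable=[2, 3, 4]))          # 2: it overlaps the write quorum
+</code></pre>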
<h3 id="leases">Leases</h3> + +<ul> + <li> + <p>The master holds a <em>master lease</em> when responding to read queries and refreshes this lease periodically by communicating with a majority of replicas.</p> + </li> + <li> + <p>The lease guarantees that the master is not returning stale data if a partition or failure causes the master to lose its mastership, i.e., holding the lease guarantees that the master is still the master.</p> + </li> + <li> + <p>Moreover, elections cannot be completed within the lease timeout interval.</p> + </li> +</ul> + +<h3 id="replica-group-membership">Replica Group Membership</h3> + +<ul> + <li> + <p>SSO maintains a replica configuration containing the logical (DNS) name and IP address of each replica.</p> + </li> + <li> + <p>In case of any changes to the configuration, the changes are specified in a file that the master reads periodically.</p> + </li> + <li> + <p>If the configuration changes, the master initiates a configuration change and updates the database.</p> + </li> + <li> + <p>Non-master replicas can get the new configuration from the database.</p> + </li> + <li> + <p>A new replica, or a replica that lost state (say due to a failure), starts as a non-voting replica and cannot participate in an election till it has caught up with the master as of the time the replica joined (again).</p> + </li> +</ul> + + + + + Searching for Build Debt - Experiences Managing Technical Debt at Google + + 2020-11-09T00:00:00-05:00 + /site/2020/11/09/Searching for Build Debt - Experiences Managing Technical Debt at Google + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes the efforts to control and repay the technical debt in the build system at Google (called the Build Debt).</p> + </li> + <li> + <p>Guiding Principles:</p> + + <ul> + <li> + <p>Automate techniques to analyze and fix issues that contribute to technical debt.</p> + </li> + <li> + <p>Make it easier to do the right thing, as developers can incur technical debt unknowingly.</p> + </li> + <li> + <p>Make it hard to do the wrong thing, e.g., by building stricter checks into the build process.</p> + </li> + </ul> + </li> + <li> + <p>Note that some of the metrics and design decisions may be outdated now (the paper was written in 2012). However, the core message is still relevant.</p> + </li> + <li> + <p><a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37755.pdf">Link to the paper</a></p> + </li> +</ul> + +<h2 id="googles-build-system-debt">Google’s Build System Debt</h2> + +<ul> + <li> + <p>BUILD files encapsulate the specifications for building software.</p> + </li> + <li> + <p>Generally, these files are maintained manually, and the dependencies may not be up-to-date over time.</p> + </li> + <li> + <p>In extreme cases, some of the build targets are not built for months.
Such targets are called zombie targets.</p> + </li> + <li> + <p>Originally, any project could depend on any other project’s internal details, thus creating (sometimes unwanted) couplings.</p> + </li> + <li> + <p>If the lower-level project did not intend to expose some internal details, the unwanted couplings introduce technical debt and make it harder to modify the lower-level project.</p> + </li> + <li> + <p>One form of technical debt is the visibility debt, or the cost of back-fitting visibility rules onto the existing build specifications to re-establish the appropriate encapsulations.</p> + </li> + <li> + <p>Another example of technical debt is dead code that can confuse the developers looking for useful APIs.</p> + </li> +</ul> + +<h2 id="dependency-debt">Dependency Debt</h2> + +<ul> + <li> + <p><em>Over-declared</em> or <em>underutilized</em> dependencies can slow the build and testing of systems.</p> + </li> + <li> + <p><em>Under-declared</em> dependencies can make the build process brittle and make it difficult to remove <em>over-declared</em> dependencies.</p> + </li> + <li> + <p>Potential solutions for <em>over-declared</em> dependencies include:</p> + + <ul> + <li> + <p>Setting aside some dedicated time for fixing build rules. But this approach is not automated, and potential breakages make it harder for developers to do the right thing.</p> + </li> + <li> + <p>Automatically adding all the <em>under-declared</em> dependencies to the BUILD files. The system can raise an error if a direct dependency is missing, making it harder to do the wrong thing.</p> + </li> + <li> + <p>Automation can be applied for finding/reporting the over-declared dependencies as well.</p> + </li> + </ul> + </li> + <li> + <p>Potential solutions for <em>underutilized</em> dependencies include:</p> + + <ul> + <li> + <p>While it is challenging to automate fixing <em>underutilized</em> dependencies, automating the discovery of such dependencies is still useful.</p> + </li> + <li> + <p>Highlighting dependencies with high cost and low removal effort could incentivize developers to clean up their projects.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="zombie-targets">Zombie Targets</h2> + +<ul> + <li> + <p>Zombie targets can be identified by querying the results of build and test runs.</p> + </li> + <li> + <p>A target is marked as “dead” if the attempts to build it have failed for at least 90 days. Until then, build errors are considered to be transient.</p> + </li> + <li> + <p>A zombie target can be eliminated by deleting its definition from the BUILD file and deleting the source files that are reachable only via the zombie target.</p> + </li> +</ul>
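+
+<p>A tiny sketch of the zombie-detection rule described above (hypothetical bookkeeping; the paper does not give code, and the exact rule may differ): a target whose builds have kept failing for at least 90 days, with no success since, is flagged as dead.</p>
+
+<pre><code class="language-python">from datetime import datetime, timedelta
+
+def find_zombie_targets(build_history, now, grace=timedelta(days=90)):
+    """build_history maps a target to a list of (timestamp, succeeded) attempts."""
+    zombies = []
+    for target, attempts in build_history.items():
+        successes = [ts for ts, ok in attempts if ok]
+        failures = [ts for ts, ok in attempts if not ok]
+        last_ok = max(successes) if successes else None
+        # Failures that happened after the most recent successful build.
+        stale = [ts for ts in failures if last_ok is None or ts &gt; last_ok]
+        if stale and now - min(stale) &gt;= grace:
+            zombies.append(target)         # failing for 90+ days: mark as dead
+    return zombies
+
+history = {"//app:main": [(datetime(2012, 1, 1), False), (datetime(2012, 3, 1), False)]}
+print(find_zombie_targets(history, now=datetime(2012, 6, 1)))   # ['//app:main']
+</code></pre>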
<h2 id="visibility-debt">Visibility Debt</h2> + +<ul> + <li> + <p>Originally, the default visibility of all the targets was public, leading to unintended dependencies.</p> + </li> + <li> + <p>The visibility of all the existing builds was set to <em>legacy_public</em>, and the default visibility was changed to private.</p> + </li> + <li> + <p>This encouraged developers to explicitly consider if they wanted other projects to depend on their project.</p> + </li> +</ul> + +<h2 id="dead-flags">Dead Flags</h2> + +<ul> + <li> + <p>Google developed its own command-line parsing utilities and defined a set of recognized command-line flags for libraries and binaries.</p> + </li> + <li> + <p>Over time, the number of flags grew to half a million, and many of these flags are not useful anymore (i.e., dead).</p> + </li> + <li> + <p>These dead flags can make it hard to understand and refactor code.</p> + </li> + <li> + <p>Existing flags were analyzed to check which ones had always been set to the same value; such flags were replaced by their constant values, clearing about 150 thousand flags.</p> + </li> + <li> + <p>Removing dead flags also helps to clean up dead/unreachable code.</p> + </li> +</ul> + + + + + One Solution is Not All You Need - Few-Shot Extrapolation via Structured MaxEnt RL + + 2020-11-02T00:00:00-05:00 + /site/2020/11/02/One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Key idea: Practicing and remembering diverse solutions to a task can lead to robustness to that task’s variations.</p> + </li> + <li> + <p>The paper proposes a framework to implement this idea - train multiple policies such that they are <em>collectively</em> robust to a new distribution over environments while using a single training environment.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2010.14484">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>During training, the agent has access to only one MDP.</p> + </li> + <li> + <p>During the evaluation, the agent encounters a new MDP which has the same state and action space but may have a different reward and transition function.</p> + </li> + <li> + <p>The agent is allowed some interactions (say <em>k</em>) with the test MDP and is then evaluated on the test MDP.
The setup is referred to as <em>few-shot robustness</em>.</p> + </li> +</ul> + +<h2 id="structured-maximum-entropy-reinforcement-learning-smerl">Structured Maximum Entropy Reinforcement Learning (SMERL)</h2> + +<ul> + <li> + <p>Represent a set of policies using a latent variable policy (i.e., a policy conditioned on a latent variable <em>z</em>).</p> + </li> + <li> + <p>This has two benefits: (i) multiple policies can be represented by the same object, and (ii) diverse behaviors can be learned by encouraging the trajectories corresponding to different <em>z</em> to be different, while still solving the task.</p> + </li> + <li> + <p>A diversity-inducing objective is used to encourage the agent to learn different trajectories for different <em>z</em>.</p> + </li> + <li> + <p>Specifically, the mutual information between <em>p(Z)</em> and the marginal trajectory distribution for the latent variable policy is maximized, subject to the constraint that each policy achieves close to optimal returns in the train MDP.</p> + </li> + <li> + <p>The mutual information between <em>p(Z)</em> and the marginal trajectory distribution for the latent variable policy is lower bounded by the sum of mutual information terms over the individual states (appearing in the trajectory).</p> + </li> + <li> + <p>An unsupervised reward function is defined using the mutual information between states and latent variables, i.e., \(r(s, a) = \log q_{\phi}(z \| s) - \log p(z)\), where \(q_{\phi}\) is a learned discriminator.</p> + </li> + <li> + <p>This unsupervised reward is optimized for only when the policy achieves close to an optimal return, i.e., the environment return is close to the optimal return. Otherwise, the agent optimizes only for the environment return (a code sketch follows the Implementation section below).</p> + </li> +</ul> + +<h3 id="implementation">Implementation</h3> + +<ul> + <li> + <p>SMERL is implemented using SAC with a latent variable maximum entropy policy.</p> + </li> + <li> + <p>The set of latent variables is a fixed discrete set \(Z\), and \(p(z)\) is set to be a uniform distribution over this set.</p> + </li> + <li> + <p>At the start of an episode, a \(z\) is sampled and used throughout the episode.</p> + </li> + <li> + <p>The discriminator \(q_{\phi}(z\|s)\) is trained to infer \(z\) from the visited states.</p> + </li> + <li> + <p>A baseline SAC agent is trained beforehand to evaluate if the current training policy achieves close to optimal environment return.</p> + </li> + <li> + <p>During the evaluation, the policy corresponding to each latent variable is executed in the test MDP, and the policy with the maximum return is selected.</p> + </li> +</ul>
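+
+<p>A rough sketch of the reward switch described above (illustrative; <code>alpha</code>, <code>epsilon</code>, and the discriminator interface are assumptions, not the paper’s exact formulation):</p>
+
+<pre><code class="language-python">import math
+
+def smerl_reward(env_reward, episode_return, optimal_return,
+                 disc_prob_z_given_s, prior_prob_z, alpha=1.0, epsilon=0.1):
+    """Add the diversity bonus only when the policy is close to optimal."""
+    bonus = math.log(disc_prob_z_given_s) - math.log(prior_prob_z)
+    if episode_return &gt;= optimal_return - epsilon:
+        return env_reward + alpha * bonus   # near-optimal: also optimize diversity
+    return env_reward                       # otherwise: environment reward only
+</code></pre>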
<h2 id="theoretical-analysis">Theoretical Analysis</h2> + +<ul> + <li> + <p>Given an MDP \(M\) and \(\epsilon&gt;0\), the MDP robustness set is defined as the set of all MDPs \(M'\) where the optimal policy of \(M'\) produces the same trajectory distribution in \(M'\) as \(M\). Moreover, on the training MDP \(M\), the optimal policies (corresponding to \(M\) and \(M'\)) obtain similar returns.</p> + </li> + <li> + <p>The paper shows that SMERL generalizes to MDPs belonging to the robustness set.</p> + </li> + <li> + <p>It also provides a simplified view of the optimization objective and shows how it naturally leads to a trajectory-centric mutual information objective.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Environments:</p> + + <ul> + <li> + <p>2D navigation environments with a point mass.</p> + </li> + <li> + <p>Mujoco environments: HalfCheetah-Goal, Walker2d-Velocity, Hopper-Velocity.</p> + </li> + </ul> + </li> + <li> + <p>On the 2D navigation environment, the paper shows that SMERL learns to use different trajectories to reach the goal.</p> + </li> + <li> + <p>On the Mujoco setup, the evaluation shows that SMERL generally outperforms the best-performing baseline or is close to the best-performing baseline on different tasks.</p> + </li> + <li> + <p>Generally, higher train performance does not correlate with higher test performance, and there is no single policy that performs best across all the tasks. Thus, it should be beneficial to learn multiple diverse policies that can be selected from during testing.</p> + </li> +</ul> + + + + + Learning Explanations That Are Hard To Vary + + 2020-10-19T00:00:00-04:00 + /site/2020/10/19/Learning Explanations That Are Hard To Vary + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper builds on the principle “good explanations are hard to vary” to propose that <em>invariant mechanisms</em> can be identified by finding explanations (say model parameters) that are hard to vary across examples.</li> + <li><a href="https://arxiv.org/abs/2009.00329">Link to the paper</a></li> + <li><a href="https://github.com/gibipara92/learning-explanations-hard-to-vary">Link to the code</a></li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li>Collection of <em>d</em> different datasets (from different environments), where each dataset is a collection of input-target tuples.</li> + <li>The objective is to learn a function <em>f</em> (also called the <em>mechanism</em>) that maps the input to the target (for all the environments).</li> + <li>The standard approach is to pool the loss for examples corresponding to the different environments and perform gradient updates on this average-pooled loss.</li> + <li>In this standard gradient-based setup, the model may not learn invariances due to the following reasons: + <ul> + <li>The model learns the spurious features first, after which the training loss is too small to drive further learning.</li> + <li>The pooled loss is generally computed by summing (or averaging) the loss corresponding to individual examples. Thus the gradient for each example is calculated independently.
Each sample can be thought of as a dataset of size 1, for which all the features are relevant.</li> + <li>Gradient descent with averaging (of gradients across the environments) greedily maximizes for the learning speed and not invariance.</li> + </ul> + </li> + <li>Performing arithmetic mean can be seen as performing an OR operation (i.e., the sum can be high if any one of the constituents is high), whereas performing geometric mean can be seen as performing an AND operation (i.e., the product can be high only if all the constituents are high).</li> +</ul> + +<h3 id="invariant-learning-consistencyilc">Invariant Learning Consistency (ILC)</h3> + +<ul> + <li>Given an algorithm \(A\), let \(\theta_{A}^{*}\) denote the set of convergence points of \(A\) when trained on all the environments.</li> + <li>Each convergence point is associated with a consistency score.</li> + <li>Intuitively, given a convergence point and an environment <em>e</em>, find the set of parameters equivalent to the convergence point (in terms of loss) with respect to <em>e</em>. Call this set <em>S</em>.</li> + <li>Evaluate the points in this set for all the remaining environments. For the given convergence point, an environment <em>e’</em> is consistent with <em>e</em> if the maximum difference in the loss between the two environments is small, across all points belonging to <em>S</em>.</li> + <li>This idea is used to define the invariant learning consistency score for algorithm \(A\), which measures the expected consistency of the converged points (on the pooled data) across all the environments.</li> + <li>The paper shows that the converged points’ consistency is linked to the Hessians’ geometric mean and that, for the convex quadratic case, using the elementwise geometric mean of gradients improves consistency.</li> + <li>However, there are some practical challenges: + <ul> + <li>Geometric mean is defined only when all signs are consistent. This issue can potentially be handled by treating different signs as 0.</li> + <li>There is very little flexibility in “partial” agreement, and even a single zero gradient component can stop optimization for that component.
This can probably be handled by not masking if many environments have a gradient for that component.</li> + <li>The geometric mean needs to be computed in the log-domain (for numerical stability), but that can be computationally more expensive.</li> + <li>When using adaptive optimizers like Adam, the exact magnitude of the geometric mean will be ignored because of the rescaling for the local curvature adaptation.</li> + </ul> + </li> + <li>Some of these challenges can be handled by using average gradients when the geometric mean would be 0 and masking out components based on the sign.</li> +</ul> + +<h3 id="and-mask">AND-mask</h3> + +<ul> + <li>The ideas from the previous section can be used to develop a practical algorithm called AND-mask (a code sketch follows this list).</li> + <li>Zero-out gradients that have inconsistent signs across some threshold number (a hyper-parameter) of environments.</li> + <li>In the presence of purely random gradient patterns, the AND-mask decreases such signals’ strength exponentially fast.</li> +</ul>
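+
+<p>A compact NumPy sketch of the AND-mask (illustrative; the official code linked above is the reference, and the agreement rule here is a simplified variant): a gradient component survives only if enough environments agree on its sign.</p>
+
+<pre><code class="language-python">import numpy as np
+
+def and_mask(env_grads, agreement_threshold=1.0):
+    """env_grads: array of shape (num_envs, num_params), one gradient per environment."""
+    signs = np.sign(env_grads)
+    # Per component: fraction of environments agreeing with the majority sign.
+    agreement = np.abs(signs.mean(axis=0))
+    mask = (agreement &gt;= agreement_threshold).astype(env_grads.dtype)
+    # Average gradient, with disagreeing components zeroed out.
+    return mask * env_grads.mean(axis=0)
+
+grads = np.array([[0.5, 0.2], [0.4, -0.3]])   # two environments, two parameters
+print(and_mask(grads))                         # [0.45 0.  ]: only the first agrees
+</code></pre>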
<h2 id="experiments">Experiments</h2> + +<h3 id="synthetic-memorization-dataset">Synthetic Memorization Dataset</h3> + +<ul> + <li>This is a binary classification task with two kinds of features: (i) “meaningful” features that are shared across environments but harder for the model to learn and (ii) “shortcut” features that are easy to learn but not shared across environments.</li> + <li>While the dataset may look simple, it is difficult to find the invariant mechanism because the “shortcut” features allow for a simple, linear decision boundary, with a large margin, that is fast to learn, has perfect accuracy, is robust to input noise, and has no iid generalization gap.</li> + <li>Baselines: + <ul> + <li>MLPs trained with regularizers like dropout, L1, L2, and batch norm.</li> + <li>Domain Adversarial Neural Networks (DANN)</li> + <li>Invariant Risk Minimization (IRM)</li> + </ul> + </li> + <li>In terms of results, AND-mask with L1/L2 regularizers gives the best results.</li> + <li>Empirically, the paper shows that the signal from the “meaningful” features is present when the gradients are averaged, but its magnitude is much smaller than the signal from the “shortcut” features.</li> +</ul> + +<h3 id="experiments-on-cifar-10">Experiments on CIFAR-10</h3> + +<ul> + <li>A ResNet model is trained on the CIFAR-10 dataset with random labels, with and without the AND-mask.</li> + <li>The model with the AND-mask did not memorize the data, whereas the model without the AND-mask did. As a sanity check, the paper ensured that both the models generalize well when trained with the original labels.</li> + <li>Note that for this experiment, every example was treated as having come from its own environment.</li> +</ul> + +<h3 id="behavioral-cloning-on-coinrun">Behavioral Cloning on CoinRun</h3> + +<ul> + <li>Train an expert policy using PPO for 400M steps on the full distribution of levels.</li> + <li>Generate a dataset of state-action pairs. The training data consists of 1000 states from each of the 64 levels, while the test data comes from 2000 levels.</li> + <li>A ResNet18 model is used as the imitation learning policy.</li> + <li>The exact implementation of the AND-mask is a little more involved here, but the key takeaway is that the model trained with the AND-mask identifies invariant mechanisms across the different levels.</li> +</ul> + + + + + Remembering for the Right Reasons - Explanations Reduce Catastrophic Forgetting + + 2020-10-12T00:00:00-04:00 + /site/2020/10/12/Remembering for the Right Reasons - Explanations Reduce Catastrophic Forgetting + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper hypothesizes that catastrophic forgetting can happen if the model cannot rely on the “reasoning” used for an old datapoint. If that is the case, catastrophic forgetting may be alleviated when the model “remembers” why it made a prediction previously.</li> + <li>The paper presents a simple instantiation of this hypothesis, in the form of a technique called Remembering for the Right Reasons (RRR).</li> + <li>The idea is to store model explanations, along with previous examples, in the replay buffer. During replay, an additional <em>explanation loss</em> is used, along with the regular replay loss.</li> + <li><a href="https://arxiv.org/abs/2010.01528">Link to the paper</a></li> + <li><a href="https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons">Link to the code</a></li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li>The model is trained over a sequence of data distributions in the class-incremental learning setup. A single-head architecture is used so that the task ID is not required during inference.</li> + <li>Along with the standard replay buffer (\(M^{rep}\)) for the raw input examples (from different tasks), another replay buffer (\(M^{RRR}\)) is maintained for storing the “explanations” (in the form of saliency maps) corresponding to the examples in \(M^{rep}\).</li> + <li>RRR is implemented as an L1 loss on the error between the saliency map generated after training on the current task and the saliency map in \(M^{RRR}\) (a code sketch follows this section).</li> + <li>Saliency maps need to be generated while the model is training. This requirement rules out black-box saliency methods, which can be used only after training.</li> + <li>The gradient-based white-box explainability techniques that are used include: + <ul> + <li>Vanilla backpropagation - Perform a forward pass through the model and take the gradient of the given output class with respect to the input.</li> + <li>Backpropagation with SmoothGrad - Saliency maps generated using vanilla backpropagation can be visually noisy. These maps can be improved by adding pixel-wise Gaussian noise to <em>n</em> copies of the image and averaging the resulting gradients.
The paper used <em>n=40</em>.</li> + <li>Gradient-weighted Class Activation Mapping (Grad-CAM) - Uses gradients to determine the importance of feature map activations on a given prediction.</li> + </ul> + </li> + <li>RRR can be easily used with memory and regularization based approaches.</li> + <li>The paper combined RRR with the following standard Class Incremental Learning (CIL) models: + <ul> + <li><a href="https://arxiv.org/abs/2003.11652">iTAML : An incremental task-agnostic meta-learning approach</a></li> + <li><a href="https://arxiv.org/abs/1807.09536">End-to-end incremental learning (EEIL)</a></li> + <li><a href="https://arxiv.org/abs/1905.13260">Large scale incremental learning (BiC)</a></li> + <li><a href="https://arxiv.org/abs/2004.10956">TOpology-Preserving knowledge InCrementer (TOPIC)</a></li> + <li><a href="https://arxiv.org/abs/1611.07725">iCaRL: Incremental Classifier and Representation Learning</a></li> + <li><a href="https://arxiv.org/abs/1612.00796">Elastic Weight Consolidation</a></li> + <li><a href="https://arxiv.org/abs/1606.09282">Learning without forgetting</a></li> + </ul> + </li> +</ul>
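+
+<p>A minimal PyTorch-style sketch of the RRR loss (illustrative; the official code linked above is the reference): the saliency map is the input gradient of the predicted class score, and an L1 penalty ties it to the map stored when the example was first learned.</p>
+
+<pre><code class="language-python">import torch
+
+def saliency_map(model, x, targets):
+    """Vanilla backpropagation: gradient of the class score w.r.t. the input."""
+    x = x.clone().requires_grad_(True)
+    scores = model(x).gather(1, targets.view(-1, 1)).sum()
+    grad, = torch.autograd.grad(scores, x, create_graph=True)
+    return grad.abs()
+
+def rrr_loss(model, x_replay, y_replay, stored_maps, lam=1.0):
+    """Replay loss plus an L1 penalty on how much the explanations have drifted."""
+    replay_loss = torch.nn.functional.cross_entropy(model(x_replay), y_replay)
+    current_maps = saliency_map(model, x_replay, y_replay)
+    explanation_loss = (current_maps - stored_maps).abs().mean()
+    return replay_loss + lam * explanation_loss
+</code></pre>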
<h2 id="experiments">Experiments</h2> + +<h3 id="few-shiot-class-incremental-learning">Few-Shot Class Incremental Learning</h3> + +<ul> + <li>C-way K-shot class incremental learning with C classes and K training samples per class, and b base classes to learn as the first task.</li> + <li>Caltech-UCSD Birds dataset with 100 base classes and the remaining 100 classes divided into ten tasks, with three samples per class. The test set is not changed.</li> + <li>In terms of saliency maps, Grad-CAM is better than vanilla backpropagation, which in turn is comparable to SmoothGrad. The same trend is seen in terms of memory overhead, with Grad-CAM having the least memory overhead.</li> + <li>Adding the RRR loss improves the performance of all the baselines.</li> +</ul> + +<h3 id="standard-class-incremental-learning">Standard Class Incremental Learning</h3> + +<ul> + <li>CIFAR100 and ImageNet100 with a memory budget of 2000 samples.</li> + <li>Adding the RRR loss improves all the baselines’ performance, and the gains for ImageNet100 are more significant than the gains for CIFAR100.</li> +</ul> + +<h3 id="how-often-does-the-model-remember-its-decision-for-the-right-reason">How often does the model remember its decision for the right reason?</h3> + +<ul> + <li>The paper uses the Pointing Game (PG) experiment, which uses the ground truth image segmentation to define the true object region.</li> + <li>If the maximum attention location (in the predicted saliency map) falls inside the object, it is considered a <em>hit</em>, else a <em>miss</em>. A <em>hit</em> on a previous example is considered a proxy for the model remembering its decision for the right reason.</li> + <li>The precision and recall are reported for the <em>hit</em> metric. Using RRR increases both precision (i.e., less often does the model make the correct decision without looking at the right evidence) and recall (i.e., less frequently does the model make an incorrect decision despite looking at the proper evidence).</li> +</ul> + + + + + A Foliated View of Transfer Learning + + 2020-09-28T00:00:00-04:00 + /site/2020/09/28/A Foliated View of Transfer Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a formalism for transfer learning, offers a definition of relatedness between tasks, and proposes foliations as a mathematical framework to represent the relationship between tasks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2008.00546">Link to the paper</a></p> + </li> +</ul> + +<h2 id="summary">Summary</h2> + +<ul> + <li> + <p>The term <em>representation</em> denotes a mechanism for <em>describing</em> and <em>realizing</em> abstract objects, thus allowing manipulation and reasoning about the objects. This description goes beyond the usual meaning (in deep learning), where <em>representation</em> denotes some useful information about the data.</p> + </li> + <li> + <p><em>Relatedness</em> describes <em>what</em> changes between tasks. Consider a set of transformations (or functions) that convert one task to another. A <em>relationship</em> between two tasks is an element of this transformation set.</p> + </li> + <li> + <p>Given a transformation set, one can define a <em>set of related tasks</em>, which is the set of all the tasks that can be transformed into each other using the functions from the given transformation set. This set of tasks is an equivalence class, and the transformation set is the equivalence relationship.</p> + </li> + <li> + <p>Given two related tasks <em>t1</em> and <em>t2</em>, denote the corresponding models (trained on those tasks) as <em>m1</em> and <em>m2</em>. One can assume that <em>m1</em> and <em>m2</em> are related in the same way as <em>t1</em> and <em>t2</em> (equivariance).</p> + </li> + <li> + <p>Now, given a set of transformations, one can partition the space of continuous functions into non-overlapping spaces, which describe a set of related tasks. These spaces are referred to as the <em>parallel spaces</em> or <em>transfer spaces</em>.</p> + </li> + <li> + <p>A parallel space has a lower dimension than the original space, so knowing which parallel space a model lies on can make it easier to find it. This is the primary motivation behind transfer learning - knowing the relationship between tasks can make it easier to find a solution to new tasks.</p> + </li> + <li> + <p>Another way of partitioning the set of transformations is to use tessellation (e.g., Voronoi diagrams). Tasks in the same partition are similar to each other as compared to a task from another partition.</p> + </li> + <li> + <p>Two tasks are defined as <em>similar</em> if the distance between them (under some distance metric) is small.</p> + </li> + <li> + <p>Similarity is a <em>geometric</em> notion, while relatedness is a <em>transformative</em> notion. Parallelized space is to relatedness what tessellation is to similarity.</p> + </li> + <li> + <p>The distinction between similarity and relatedness is quite nuanced, and the authors provide several examples to differentiate between them.</p> + </li> + <li> + <p>Similarity can only be measured in terms of a reference element (similar to what).
For example, when one finetunes a pre-trained model on a new task, one assumes that the model’s pretraining task is similar to the current task.</p> + </li> + <li> + <p>Given a set (say <em>T</em>), a <em>quantity</em> (a function that maps elements of <em>T</em> to a <em>k</em> dimensional vector) is said to be <em>invariant</em> with respect to a transformation <em>p</em> (defined on <em>T</em>) if <em>q(f) = q(p(f))</em>, i.e., the value of <em>q</em> at <em>f</em> (belonging to <em>T</em>) does not change when <em>f</em> is transformed by <em>p</em>.</p> + </li> + <li> + <p>If one assumes that the set of transformations is a group, specifically a Lie group whose action on the set of tasks is locally free and regular, then one can define a parallel partitioning of the space of tasks and the space of models.</p> + </li> + <li> + <p>One can develop a hierarchical categorization scheme for the set of all considered tasks using the invariant quantities.</p> + </li> + <li> + <p>One can consider the space of tasks and models to be smooth manifolds, as manifolds naturally give a notion of representation and transformations between them.</p> + </li> + <li> + <p>A manifold is a topological space that can be locally mapped to a Euclidean space using coordinate charts. One can define a regular foliation by choosing charts that satisfy certain conditions. In that case, the manifold has immersed, connected, non-intersecting submanifolds called leaves.</p> + </li> + <li> + <p>The charts (that satisfy those conditions) give a set of rectified coordinates, where the notions of “which leaf a point is on” and “where on the leaf it is” are clearly separated.</p> + </li> + <li> + <p>Thus, foliations can provide the theoretical tools to work with parallel spaces.</p> + </li> + <li> + <p>How foliations can be incorporated into the theory and solutions for transfer learning is left as future work.</p> + </li> +</ul> + + + + + Harvest, Yield, and Scalable Tolerant Systems + + 2020-09-21T00:00:00-04:00 + /site/2020/09/21/Harvest, Yield, and Scalable Tolerant Systems + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>A classic paper that looks into strategies for scaling large systems that can tolerate graceful degradation.</p> + </li> + <li> + <p><a href="https://dl.acm.org/doi/10.5555/822076.822436">Link to the paper</a></p> + </li> +</ul> + +<h2 id="cap-theorem">CAP Theorem</h2> + +<ul> + <li> + <p>CAP refers to strong <strong>C</strong>onsistency, high <strong>A</strong>vailability, and <strong>P</strong>artitionability.</p> + </li> + <li> + <p>Strong consistency refers to single-copy ACID consistency.</p> + </li> + <li> + <p>High availability means any consumer can access the data anytime. Generally, this is achieved by adding one or more data replicas.</p> + </li> + <li> + <p>Partitionability means that the system can survive a partition between the different replicas.</p> + </li> + <li> + <p>The strong CAP theorem states that any system can have at most two of the three properties.</p> + </li> + <li> + <p>The weak CAP theorem says that the stronger the guarantees about any two of the properties, the weaker the guarantees about the third.</p> + </li> +</ul> + +<h2 id="harvest-yield-and-cap-theorem">Harvest, Yield, and CAP Theorem</h2> + +<ul> + <li> + <p>Assume that the clients are making requests to a server.</p> + </li> + <li> + <p>There are two quantities of interest here:</p> + + <ul> + <li>Yield - the probability of completing a request.</li> + <li>Harvest - the completeness of the answer to a query.</li> + </ul> + </li> + <li> + <p>In the presence of faults, a tradeoff can be made between yield and harvest. This tradeoff applies to both read and update queries.</p> + </li> +</ul>
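+
+<p>Both quantities reduce to simple ratios; a toy calculation under assumed numbers (the figures below are made up for illustration):</p>
+
+<pre><code class="language-python"># Yield: fraction of requests that get any answer at all.
+requests_made = 1000
+requests_completed = 950
+yield_ = requests_completed / requests_made     # 0.95
+
+# Harvest: fraction of the data reflected in the answer, e.g., with
+# unreplicated data spread over 100 nodes and one node down.
+nodes_total = 100
+nodes_up = 99
+harvest = nodes_up / nodes_total                # 0.99
+print(yield_, harvest)
+</code></pre>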
<h2 id="two-strategies-for-scaling-systems">Two strategies for scaling systems</h2> + +<h3 id="trading-harvest-for-yield">Trading Harvest for Yield</h3> + +<ul> + <li> + <p>In a hundred-node cluster (without replication), a single-node failure reduces harvest by 1%, and in the case of multi-node failures, the harvest degrades linearly.</p> + </li> + <li> + <p>The probability of losing high-priority data can be reduced by replicating it. However, replicating all the data would not guarantee 100% harvest and yield despite significant costs.</p> + </li> +</ul> + +<h3 id="application-decomposition-and-orthogonal-mechanisms">Application Decomposition and Orthogonal Mechanisms</h3> + +<ul> + <li> + <p>Decompose a large application into subcomponents so that each component can be provisioned separately. Strong consistency can then be applied only to the components that need it, instead of to the application as a whole.</p> + </li> + <li> + <p>Further, failure of one or more components need not cause the application to fail as a whole.</p> + </li> + <li> + <p>Decomposition also provides the opportunity to use orthogonal mechanisms, i.e., mechanisms independent of other mechanisms with no runtime interface.</p> + </li> + <li> + <p>Composition of orthogonal subsystems improves the robustness of runtime interactions by <em>locally</em> containing the errors.
For example, the orthogonal components can be restarted/replaced independently without affecting other running components.</p> + </li> +</ul> + + + + + MONet - Unsupervised Scene Decomposition and Representation + + 2020-09-14T00:00:00-04:00 + /site/2020/09/14/MONet Unsupervised Scene Decomposition and Representation + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper introduces the Multi-Object Network (MONet) architecture that learns a modular representation of images by spatially decomposing scenes into <em>objects</em> and learning a representation for these <em>objects</em>.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1901.11390">Link to the paper</a></p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>Two components:</p> + + <ul> + <li> + <p>Attention Module: generates spatial masks corresponding to the <em>objects</em> in the scene.</p> + </li> + <li> + <p>VAE: learns a representation for each <em>object</em>.</p> + </li> + </ul> + </li> + <li> + <p>VAE components:</p> + + <ul> + <li> + <p>Encoder: takes as input the image and the attention mask generated by the attention module and produces the parameters of a distribution over the latent variable <em>z</em>.</p> + </li> + <li> + <p>Decoder: takes as input the latent variable <em>z</em> and attempts to reproduce the image.</p> + </li> + </ul> + </li> + <li> + <p>The decoder loss term is weighted by the mask, i.e., the decoder tries to reproduce only those parts of the image that the attention mask focuses on.</p> + </li> + <li> + <p>The attention mechanism is auto-regressive, with an ongoing state (called a scope) that tracks which parts of the image are not yet attended over (a code sketch follows the Motivation section below).</p> + </li> + <li> + <p>In the last step, no attention mask is computed, and the previous scope is used as-is. This ensures that all the masks sum to 1.</p> + </li> + <li> + <p>The VAE also models the attention mask over the components, i.e., the probability that the pixels belong to a particular component.</p> + </li> +</ul> + +<h2 id="motivation">Motivation</h2> + +<ul> + <li> + <p>A model could efficiently process compositional visual scenes if it can exploit some recurring structures in the scene.</p> + </li> + <li> + <p>The paper validates this hypothesis by showing that an autoencoder performs better if it can build up the scenes compositionally, processing one mask at a time (these masks are ground-truth spatial masks) rather than processing the scene all at once.</p> + </li> +</ul>
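+
+<p>The scope bookkeeping in the attention module reduces to a short recursion; a minimal sketch (illustrative; <code>attend</code> stands in for the learned attention network):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def decompose(image, attend, num_slots):
+    """Recursively split an image into masks that are guaranteed to sum to 1."""
+    scope = np.ones(image.shape[:2])     # the not-yet-attended part of the image
+    masks = []
+    for _ in range(num_slots - 1):
+        alpha = attend(image, scope)     # values in [0, 1], same shape as scope
+        masks.append(scope * alpha)      # claim part of the remaining scope
+        scope = scope * (1.0 - alpha)    # shrink what is left to explain
+    masks.append(scope)                  # the last slot takes the leftover scope
+    return masks                         # telescoping product: masks sum to 1
+</code></pre>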
<h2 id="results">Results</h2> + +<ul> + <li> + <p>The VAE encoder parameterizes a diagonal Gaussian latent posterior, with a spatial broadcast decoder that encourages the VAE to learn disentangled features.</p> + </li> + <li> + <p>MONet with seven slots is trained on the <em>Objects Room</em> dataset with 1-3 objects.</p> + + <ul> + <li> + <p>It learns to generate different attention masks for different objects.</p> + </li> + <li> + <p>Combining the reconstructed components using the corresponding attention masks produces a good quality reconstruction of the entire scene.</p> + </li> + <li> + <p>Since it is an autoregressive model, MONet can be evaluated for more slots. The model generalizes to novel scene configurations (not seen during training).</p> + </li> + </ul> + </li> + <li> + <p>On the Multi-dSprites dataset (a modification of the dSprites dataset), the model (post-training) distinguishes individual sprites and the background.</p> + </li> + <li> + <p>On the CLEVR dataset (2-10 objects per image), the model generates good image segmentations and reconstructions and can distinguish between overlapping shapes.</p> + </li> +</ul> + + + + + Revisiting Fundamentals of Experience Replay + + 2020-09-07T00:00:00-04:00 + /site/2020/09/07/Revisiting Fundamentals of Experience Replay + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents an extensive study of the effects of experience replay in Q-learning based methods.</p> + </li> + <li> + <p>It focuses explicitly on the replay capacity and the replay ratio (the ratio of learning updates to experience collected).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2007.06700">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Replay capacity is defined as the total number of transitions stored in the replay buffer.</p> + </li> + <li> + <p>The age of a transition (stored in the replay buffer) is defined as the number of gradient steps taken by the agent since the transition was stored.</p> + </li> + <li> + <p>The larger the replay capacity, the greater the age of the oldest transition (also referred to as the age of the oldest policy).</p> + </li> + <li> + <p>The larger the replay capacity, the greater the degree of “off-policyness” of the transitions in the buffer (with everything else held constant).</p> + </li> + <li> + <p>The replay ratio is the number of gradient updates per environment transition. This ratio can be used as a proxy for how often the agent uses old data (vs. collecting new data) and is related to off-policyness.</p> + </li>
 <li> + <p>In the <a href="https://www.nature.com/articles/nature14236">DQN paper</a>, the replay ratio is set to be 0.25.</p> + </li> + <li> + <p>For the experiments, a subset (of 14 games) is selected from the Atari ALE (Arcade Learning Environment) with sticky actions.</p> + </li> + <li> + <p>Each experiment is repeated with three seeds.</p> + </li> + <li> + <p>Rainbow is used as the base algorithm.</p> + </li> + <li> + <p>The total number of gradient updates and the batch size (per gradient update) are fixed for all the experiments.</p> + </li> + <li> + <p>Rainbow uses a replay capacity of 1M and an oldest policy of age 250K.</p> + </li> + <li> + <p>In the experiments, the replay capacity varies from 0.1M to 10M (5 values), and the age of the oldest policy varies from 25K to 25M (4 values).</p> + </li> +</ul>
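+
+<p>Assuming one batch of updates per environment-collection step, these quantities are tied together by simple arithmetic, which is why the paper can hold one fixed while varying another:</p>
+
+<pre><code class="language-python"># Every 1/replay_ratio environment transitions trigger one gradient update,
+# so a full buffer of `capacity` transitions spans capacity * replay_ratio updates.
+replay_capacity = 1_000_000
+replay_ratio = 0.25               # gradient updates per environment transition
+oldest_policy_age = replay_capacity * replay_ratio
+print(oldest_policy_age)          # 250000.0, matching the Rainbow defaults above
+</code></pre>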
+ +<h2 id="observations">Observations</h2> + +<ul> + <li> + <p>With the age of the oldest policy fixed, performance improves with higher replay capacity, probably due to increased state-action coverage.</p> + </li> + <li> + <p>With fixed replay capacity, reducing the oldest policy’s age improves performance, probably due to the reduced off-policyness of the data in the replay buffer.</p> + </li> + <li> + <p>However, in some specific instances (with a sparse reward, hard exploration setup), performance can drop when reducing the oldest policy’s age.</p> + </li> + <li> + <p>Increasing the replay capacity, while keeping the replay ratio fixed, provides varying improvements that depend on the particular values of replay capacity and replay ratio.</p> + </li> + <li> + <p>The paper reports the effect of these choices for DQN as well.</p> + </li> + <li> + <p>Unlike Rainbow, DQN does not improve with larger replay capacity, irrespective of whether the replay ratio or the age of the oldest policy is kept fixed.</p> + </li> + <li> + <p>Given that the Rainbow agent is a DQN agent with additional components, the paper explores which of these components leads to an improvement in Rainbow’s performance as replay capacity increases.</p> + </li> +</ul> + +<h2 id="additive-experiments">Additive Experiments</h2> + +<ul> + <li> + <p>Four new DQN variants are created by adding each of Rainbow’s four components to the base DQN agent.</p> + </li> + <li> + <p>DQN with n-step returns is the only variant that benefits from increased replay capacity.</p> + </li> + <li> + <p>The usefulness of n-step returns is further validated by verifying that a Rainbow agent without n-step returns does not benefit from increased replay capacity, while a Rainbow agent without any other single component still benefits from the increased capacity.</p> + </li> + <li> + <p>Prioritized Experience Replay does not significantly affect the performance with increased replay capacity.</p> + </li> + <li> + <p>The observation that n-step returns are critical for taking advantage of larger replay sizes is surprising because uncorrected n-step returns are theoretically not suitable for off-policy learning (a sketch of this target follows at the end of this post).</p> + </li> + <li> + <p>The paper tests the limits of increasing replay capacity (with n-step returns) by performing experiments in the offline-RL setup: one agent collects a dataset of about 200M frames, and these frames are used to train another agent.</p> + </li> + <li> + <p>Even in this extreme setup, n-step returns improve the learning agent’s performance.</p> + </li> +</ul> + +<h2 id="why-do-n-step-returns-help">Why do n-step returns help?</h2> + +<ul> + <li> + <p>Hypothesis 1: n-step returns help to counter the increased off-policyness produced by a larger replay buffer.</p> + + <ul> + <li>This hypothesis does not seem to hold, as neither keeping the oldest policy fixed nor using the same contraction factor as an n-step update improves the 1-step update’s performance.</li> + </ul> + </li> + <li> + <p>Hypothesis 2: Increasing the replay buffer’s capacity may reduce the variance of the n-step returns.</p> + + <ul> + <li> + <p>This hypothesis is evaluated by training on environments with lower variance or by turning off the sticky actions in the Atari domain.</p> + </li> + <li> + <p>While the hypothesis does explain the gains from using n-step returns to some extent, n-step gains are observed even in environments with low variance.</p> + </li> + </ul> + </li> +</ul>
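+<p>For concreteness, this is the uncorrected n-step target referred to above: a discounted sum of n stored rewards, bootstrapped with the current value estimate at the n-th next state, with no off-policy correction. A minimal sketch (function name and toy values are mine):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def n_step_target(rewards, q_next, gamma=0.99, n=3):
+    """Uncorrected n-step return built from replayed transitions; the lack of
+    any off-policy correction is why its benefit with large buffers is
+    surprising."""
+    partial_return = sum(gamma ** i * rewards[i] for i in range(n))
+    return partial_return + gamma ** n * np.max(q_next)
+
+# Three stored rewards and the Q-values of the state reached after n steps.
+print(n_step_target([0.0, 1.0, 0.0], q_next=np.array([0.2, 0.5])))
+</code></pre></div></div>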
+ + + + + Deep Reinforcement Learning and the Deadly Triad + + 2020-08-31T00:00:00-04:00 + /site/2020/08/31/Deep Reinforcement Learning and the Deadly Triad + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper investigates the practical impact of the deadly triad (function approximation, bootstrapping, and off-policy learning) in deep Q-networks (trained with experience replay).</p> + </li> + <li> + <p>The deadly triad is called so because when all three components are combined, TD learning can diverge, and value estimates can become unbounded.</p> + </li> + <li> + <p>However, in practice, the components of the deadly triad have been combined successfully, for example, when training DQN agents to play Atari.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1812.02648">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>The effect of each component of the triad can be regulated with some design choices:</p> + + <ul> + <li> + <p>Bootstrapping - by controlling the number of steps before bootstrapping.</p> + </li> + <li> + <p>Function approximation - by controlling the size of the neural network.</p> + </li> + <li> + <p>Off-policy learning - by controlling how data points are sampled from the replay buffer (i.e., using different prioritization approaches).</p> + </li> + </ul> + </li> + <li> + <p>The problem is studied in two contexts: a toy example and Atari 2600 games.</p> + </li> + <li> + <p>The paper makes several hypotheses about how the different components may interact in the triad and evaluates these hypotheses by training DQN with different hyperparameters:</p> + + <ul> + <li> + <p>Number of steps before bootstrapping - 1, 3, 10.</p> + </li> + <li> + <p>Four levels of prioritization (for sampling data from the replay buffer).</p> + </li> + <li> + <p>Bootstrap target - Q-learning, target Q-learning, inverse double Q-learning, and double Q-learning.</p> + </li> + <li> + <p>Network sizes - small, medium, large, and extra-large.</p> + </li> + </ul> + </li> + <li> + <p>Each experiment was run with three different seeds.</p> + </li> + <li> + <p>The paper formulates a series of hypotheses and designs experiments to support/reject them.</p> + </li> +</ul> + +<h2 id="hypothesis-1-combining-q-learning-with-conventional-deep-rl-function-spaces-does-not-commonly-lead-to-divergence">Hypothesis 1: Combining Q-learning with conventional deep RL function spaces does not commonly lead to divergence</h2> + +<ul> + <li> + <p>Rewards are clipped between -1 and 1, and the discount factor is set to 0.99. Hence, the maximum absolute action value is bounded above by 100. This upper bound is used to detect soft-divergence in the value estimates (a sketch follows this list).</p> + </li> + <li> + <p>The paper reports that while soft-divergence does occur, the values do not become unbounded, thus supporting the hypothesis.</p> + </li> +</ul>
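+<p>The bound follows from a geometric series: with rewards in [-1, 1] and a discount factor of 0.99, no return can exceed 1 / (1 - 0.99) = 100 in absolute value. A minimal sketch of the resulting check (function names are mine):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def q_value_bound(r_max=1.0, gamma=0.99):
+    """Largest possible |Q| when rewards are clipped to [-r_max, r_max]:
+    the geometric series r_max * (1 + gamma + gamma^2 + ...)."""
+    return r_max / (1.0 - gamma)
+
+def is_soft_diverging(q_estimate, r_max=1.0, gamma=0.99):
+    # Estimates above the bound cannot correspond to any realisable return.
+    return abs(q_estimate) > q_value_bound(r_max, gamma)
+
+print(q_value_bound())           # 100.0
+print(is_soft_diverging(250.0))  # True
+</code></pre></div></div>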
+ +<h2 id="hypothesis-2-there-is-less-divergence-when-correcting-for-overestimation-bias-or-when-bootstrapping-on-separate-networks">Hypothesis 2: There is less divergence when correcting for overestimation bias or when bootstrapping on separate networks.</h2> + +<ul> + <li> + <p>One manifestation of bootstrapping on separate networks is target Q-learning. While using a separate network helps on Atari, it does not entirely solve the problem in the toy setup.</p> + </li> + <li> + <p>One manifestation of correcting for the overestimation bias is double Q-learning.</p> + </li> + <li> + <p>In its standard form, double Q-learning benefits from bootstrapping on a separate network. To isolate the gains from each component independently, an inverse double Q-learning update is used that does not use a separate target network for bootstrapping.</p> + </li> + <li> + <p>Experimentally, Q-learning is the most unstable, while target Q-learning and double Q-learning are the most stable. This observation supports the hypothesis.</p> + </li> +</ul> + +<h2 id="hypothesis-3-longer-multi-step-returns-will-diverge-easily">Hypothesis 3: Longer multi-step returns will diverge less easily</h2> + +<ul> + <li> + <p>This hypothesis is intuitive, as the dependence on bootstrapping is reduced with multi-step returns.</p> + </li> + <li> + <p>Experimental results support this hypothesis.</p> + </li> +</ul> + +<h2 id="hypothesis-4-larger-more-capacity-networks-will-diverge-less-easily">Hypothesis 4: Larger, higher-capacity networks will diverge less easily.</h2> + +<ul> + <li> + <p>This hypothesis is based on the assumption that more flexible value function approximations may behave more like the tabular case.</p> + </li> + <li> + <p>In practice, smaller networks show fewer instances of instability than the larger networks.</p> + </li> + <li> + <p>The hypothesis is not supported by the experiments.</p> + </li> +</ul> + +<h2 id="hypothesis-5-stronger-prioritization-of-updates-will-diverge-more-easily">Hypothesis 5: Stronger prioritization of updates will diverge more easily.</h2> + +<ul> + <li>This hypothesis is supported by the experiments for all four update variants.</li> +</ul> + +<h2 id="effect-of-the-deadly-triad-on-the-agents-performance">Effect of the deadly triad on the agent’s performance</h2> + +<ul> + <li> + <p>Generally, soft-divergence correlates with poor control performance.</p> + </li> + <li> + <p>For example, longer multi-step returns lead to fewer instances of instability and better performance.</p> + </li> + <li> + <p>The trend is more interesting in terms of network capacity: large networks tend to diverge more but also perform the best.</p> + </li> + <li> + <p>While action-value estimates can grow to large values, they can recover to plausible values as training progresses.</p> + </li> +</ul> + + + + + Alpha Net--Adaptation with Composition in Classifier Space + + 2020-08-24T00:00:00-04:00 + /site/2020/08/24/Alpha Net--Adaptation with Composition in Classifier Space + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Common transfer learning methods focus on transferring knowledge in the model’s feature space.</p> + </li> + <li> + <p>In contrast, the paper argues that the learned knowledge is more concisely captured in the “classifier space”, as a classifier is fitted using all the samples of a given class, while a feature representation is specific to each sample.</p> + </li> + <li> + <p>Building on this intuition, the paper proposes to combine strong classifiers (trained on large datasets) with weak classifiers (trained on smaller datasets) to improve the weak classifiers’ performance.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2008.07073">Link to the paper</a></p> + </li> +</ul> + +<h2 id="high-level-idea">High-Level Idea</h2> + +<ul> + <li> + <p>Given $n$ classifiers, $C_1, …, C_n$, trained with a large amount of data, and a weak classifier $a$ trained for a class with few samples:</p> + </li> + <li> + <p>Find the nearest neighbors of $a$ (among the strong classifiers).</p> + </li> + <li> + <p>Train a new classifier by linearly combining $a$ with its nearest classifiers (a sketch follows the Setup section below).</p> + </li> + <li> + <p>The coefficients (for linearly combining the classifiers) are learned using another network, called AlphaNet.</p> + </li> + <li> + <p>In theory, this approach can be used with any set of classifiers.</p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>A long-tailed dataset is one where some classes (referred to as the tail classes) have very few examples—for example, ImageNet-LT and Places-LT.</p> + </li> + <li> + <p>Split the long-tailed dataset into two splits - “base” classes with $B$ (number of) classes and “few” classes with $F$ (number of) classes.</p> + </li> + <li> + <p>Total number of classes: $N = B + F$.</p> + </li> + <li> + <p>Start with a pre-trained model, with classifiers $w_j$ and biases $b_j$ for $j \in (1, N)$.</p> + </li> + <li> + <p>For a given target class $j$, find its top $k$ nearest neighbor classifiers and concatenate their outputs.</p> + </li> + <li> + <p>For each “few” class, learn a feedforward network that takes the concatenated representation (of classifiers) as the input and returns a vector of $k$ $\alpha$-values.</p> + </li> + <li> + <p>These $\alpha$ values are interpreted as the classifier’s strength (or confidence) in its nearest neighbors.</p> + </li> + <li> + <p>The (normalized) $\alpha$ values are used for defining the weight and bias of the classifier for the given “few” class.</p> + </li> + <li> + <p>The collection of all the “few” classifiers is referred to as the AlphaNet.</p> + </li> + <li> + <p>The paper outlines a degenerate case, where the confidence in the predictions of all the strong classifiers goes to 0, and proposes to counter it by clamping the $\alpha$ values.</p> + </li> + <li> + <p>The entire setup is trained end-to-end using a cross-entropy loss on AlphaNet.</p> + </li> +</ul>
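+<p>One plausible reading of the composition step, sketched below: the $\alpha$-values are clamped (to avoid the degenerate case above), normalized, and used to linearly combine the weak classifier with its $k$ nearest strong classifiers. The exact clamping and normalization details here are assumptions, not the paper’s precise recipe:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def compose_classifier(classifiers, alphas, alpha_min=0.0):
+    """Linearly combine a weak classifier with its k nearest strong classifiers.
+
+    classifiers: (k + 1, d) tensor stacking the weak classifier's weights and
+    those of its k nearest neighbors; alphas: (k + 1,) coefficients predicted
+    by the AlphaNet module for this "few" class.
+    """
+    alphas = alphas.clamp(min=alpha_min)     # counter the degenerate all-zero case
+    alphas = alphas / (alphas.sum() + 1e-8)  # normalize the coefficients
+    return (alphas.unsqueeze(1) * classifiers).sum(dim=0)
+
+k, d = 5, 512
+w_few = compose_classifier(torch.randn(k + 1, d), torch.rand(k + 1))
+</code></pre></div></div>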
+ +<h2 id="results">Results</h2> + +<ul> + <li> + <p>Given the proposed approach’s flexibility, it is used to combine the state-of-the-art models on ImageNet-LT, namely retraining classifiers on class-balanced samples and training models with weight normalization. The combined setup outperforms the individual models.</p> + </li> + <li> + <p>One interesting observation is that it is useful to include the weak classifiers along with the strong classifiers, as AlphaNet adjusts the position of a weak classifier towards the appropriate strong classifiers.</p> + </li> + <li> + <p>While the idea is described in the context of long-tail data distributions, it is useful in the general context of non-stationary data distributions. One instantiation could be lifelong class-incremental learning, where the model encounters new data classes during training. For some time duration (till sufficient data points are seen), the newly seen classes are the “few” classes. This approach can help with faster adaptation when the model is yet to see sufficient examples for the unseen classes.</p> + </li> +</ul> + + + + + Outrageously Large Neural Networks--The Sparsely-Gated Mixture-of-Experts Layer + + 2020-08-14T00:00:00-04:00 + /site/2020/08/14/Outrageously Large Neural Networks--The Sparsely-Gated Mixture-of-Experts Layer + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Conditional computation is a technique to increase a model’s capacity (without a proportional increase in computation) by activating parts of the network on a per-example basis.</p> + </li> + <li> + <p>The paper describes (and addresses) the computational and algorithmic challenges in conditional computation. It introduces a sparsely-gated Mixture-of-Experts layer (MoE) with thousands of feed-forward sub-networks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1701.06538">Link to the paper</a></p> + </li> +</ul> + +<h2 id="practical-challenges">Practical Challenges</h2> + +<ul> + <li> + <p>GPUs are fast at matrix arithmetic but slow at branching.</p> + </li> + <li> + <p>Large batch sizes amortize the cost of updates, but conditional computation reduces the effective batch size for the different components of the model.</p> + </li> + <li> + <p>Network bandwidth can be a bottleneck, with the network demand overshadowing the computational demand.</p> + </li> + <li> + <p>Additional losses may be needed to achieve the desired level of sparsity.</p> + </li> + <li> + <p>Conditional computation is most useful for large datasets.</p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p><em>n</em> Expert Networks - $E_1$, …, $E_n$.</p> + </li> + <li> + <p>Gating Network $G$ to select a sparse combination of experts.</p> + </li> + <li> + <p>The output of the MoE module is the weighted sum of the experts’ predictions (weighted by the output of the gate).</p> + </li> + <li> + <p>If the gating network’s output is sparse, then the outputs of the unselected experts do not have to be computed.</p> + </li> + <li> + <p>In theory, one could use a hierarchical mixture of experts where a mixture of experts is trained at each level.</p> + </li> +</ul> + +<h3 id="choices-for-the-gating-network">Choices for the Gating Network</h3> + +<ul> + <li> + <p>Softmax gating.</p> + </li> + <li> + <p>Noisy top-k gating - add tunable Gaussian noise to the gating logits and retain only the top-k values before the softmax. A second trainable weight matrix controls the amount of noise per component (a sketch follows this list).</p> + </li> +</ul>
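+<p>A minimal sketch of noisy top-k gating under the description above (tensor shapes and names are mine): non-top-k logits are masked to negative infinity, so the softmax assigns them exactly zero weight and the corresponding experts are never evaluated.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def noisy_top_k_gate(x, w_gate, w_noise, k=4):
+    """Select a sparse combination of experts for each input in the batch."""
+    clean_logits = x @ w_gate
+    noise_std = F.softplus(x @ w_noise)  # second matrix: per-component noise scale
+    logits = clean_logits + torch.randn_like(clean_logits) * noise_std
+    top_v, top_i = logits.topk(k, dim=-1)
+    masked = torch.full_like(logits, float('-inf'))
+    masked.scatter_(-1, top_i, top_v)    # keep only the top-k logits
+    return F.softmax(masked, dim=-1)     # zero weight: expert is not computed
+
+# The MoE output is then sum_i gates[:, i] * E_i(x) over the selected experts.
+gates = noisy_top_k_gate(torch.randn(8, 32), torch.randn(32, 128), torch.randn(32, 128))
+</code></pre></div></div>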
+ +<h2 id="addressing-performance-challenge">Addressing Performance Challenge</h2> + +<ul> + <li> + <p>Shrinking Batch Problem</p> + + <ul> + <li> + <p>If the MoE selects <em>k</em> out of <em>n</em> experts, the effective batch size reduces by a factor of <em>k</em> / <em>n</em>.</p> + </li> + <li> + <p>This reduction in batch size is accounted for by combining data parallelism (for the standard layers and the gating networks) and model parallelism (for the experts in the MoE). Thus, with <em>d</em> devices, the batch size changes by a factor of (<em>k</em> x <em>d</em>) / <em>n</em>.</p> + </li> + <li> + <p>For hierarchical MoE, the primary gating network uses data parallelism while the secondary MoEs use model parallelism.</p> + </li> + <li> + <p>The paper considers LSTM models where the MoE is applied once the previous layer has finished. This increases the batch size (for the current MoE layer) by a factor equal to the number of unrolling timesteps.</p> + </li> + <li> + <p>Network bandwidth limitations can be overcome by ensuring that the ratio of an expert’s computation to its input and output size is greater than (or equal to) the ratio of computational capacity to network capacity.</p> + </li> + <li> + <p>Computational efficiency can be improved by using larger hidden layers (or more hidden layers).</p> + </li> + </ul> + </li> + <li> + <p>Balancing Expert Utilization</p> + + <ul> + <li> + <p>Importance of an expert (relative to a batch of training examples) is defined as the batchwise sum of the expert’s gate values.</p> + </li> + <li> + <p>An additional loss, called the importance loss, is added to encourage the experts to have equal importance.</p> + </li> + <li> + <p>The importance loss is defined as the square of the coefficient of variation (of the set of importance values) multiplied by a (hand-tuned) scaling factor $w_{importance}$ (see the sketch after this list).</p> + </li> + <li> + <p>In practice, an additional loss, called $L_{load}$, might be needed to ensure that the different experts get equal load (along with equal importance).</p> + </li> + </ul> + </li> +</ul>
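+<p>A minimal sketch of the importance loss as defined above; the epsilon term is my addition for numerical safety:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def importance_loss(gates, w_importance=0.1):
+    """gates: (batch, num_experts) output of the gating network.
+
+    Importance = batchwise sum of gate values per expert; the loss is the
+    squared coefficient of variation of these importances, scaled by the
+    hand-tuned factor w_importance.
+    """
+    importance = gates.sum(dim=0)                       # (num_experts,)
+    cv = importance.std() / (importance.mean() + 1e-8)  # coefficient of variation
+    return w_importance * cv ** 2
+</code></pre></div></div>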
+ +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Datasets</p> + + <ul> + <li> + <p>One Billion Word Language Modeling Benchmark.</p> + </li> + <li> + <p>100 Billion Word Google News Corpus.</p> + </li> + <li> + <p>Machine Translation datasets</p> + + <ul> + <li> + <p>Single language pairs - WMT’14 En to Fr (36M sentence pairs) and En to De (5M sentence pairs).</p> + </li> + <li> + <p>Multilingual machine translation - a large combined dataset of twelve language pairs.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>In all the setups, the proposed MoE models achieve significantly better results than the baseline models, at a lower computational cost.</p> + </li> +</ul> + + + + + Gradient Surgery for Multi-Task Learning + + 2020-08-06T00:00:00-04:00 + /site/2020/08/06/Gradient Surgery for Multi-Task Learning + <ul> + <li> + <p>The paper hypothesizes that the main optimization challenges in multi-task learning arise because of negative interference between the different tasks’ gradients.</p> + </li> + <li> + <p>It hypothesizes that negative interference happens when:</p> + + <ul> + <li> + <p>The gradients are conflicting (i.e., have a negative cosine similarity).</p> + </li> + <li> + <p>The gradients coincide with high positive curvature.</p> + </li> + <li> + <p>The difference in gradient magnitudes is quite large.</p> + </li> + </ul> + </li> + <li> + <p>The paper proposes to work around this problem by performing “gradient surgery.”</p> + </li> + <li> + <p>If two gradients are conflicting, modify the gradients by projecting each onto the other’s normal plane (a sketch follows at the end of this post).</p> + </li> + <li> + <p>This modification is equivalent to removing the conflicting component of the gradient.</p> + </li> + <li> + <p>This approach is referred to as <em>projecting conflicting gradients</em> (PCGrad).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2001.06782">Link to the paper</a></p> + </li> + <li> + <p>Theoretical Analysis</p> + + <ul> + <li> + <p>The paper proves the local conditions under which PCGrad improves multi-task gradient descent in the two-task setup.</p> + </li> + <li> + <p>The conditions are:</p> + + <ul> + <li> + <p>The angle between the task gradients is not too small.</p> + </li> + <li> + <p>The difference in the magnitude of the gradients is sufficiently large.</p> + </li> + <li> + <p>The curvature of the multi-task gradient is large.</p> + </li> + <li> + <p>A large enough learning rate.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>Experimental Setup</p> + + <ul> + <li> + <p>Multi-task supervised learning</p> + + <ul> + <li> + <p>MultiMNIST, Multi-task CIFAR100, NYUv2.</p> + </li> + <li> + <p>For Multi-task CIFAR-100, PCGrad is used with the shared parameters of the routing networks.</p> + </li> + <li> + <p>For NYUv2, PCGrad is combined with MTAN.</p> + </li> + <li> + <p>In all the cases, using PCGrad improves the performance.</p> + </li> + </ul> + </li> + <li> + <p>Multi-task Reinforcement Learning</p> + + <ul> + <li> + <p>Meta-World Benchmark.</p> + </li> + <li> + <p>PCGrad + SAC outperforms all other baselines.</p> + </li> + <li> + <p>In the context of SAC, the paper suggests learning the temperature $\alpha$ on a per-task basis.</p> + </li> + </ul> + </li> + <li> + <p>Goal-conditioned Reinforcement Learning</p> + + <ul> + <li> + <p>Goal-conditioned robotic pushing task with a Sawyer robot.</p> + </li> + <li> + <p>PCGrad + SAC outperforms vanilla SAC.</p> + </li> + </ul> + </li> + </ul> + </li> +</ul>
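+<p>A minimal sketch of the projection step on flattened gradient vectors (the sequential, pairwise form below is a simplification; the paper processes the other tasks in random order):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def pcgrad(grads):
+    """Project each task gradient onto the normal plane of every gradient it
+    conflicts with (negative cosine similarity), removing the conflicting
+    component: g_i = g_i - (g_i . g_j / ||g_j||^2) * g_j whenever g_i . g_j
+    is negative.
+    """
+    projected = [g.clone() for g in grads]
+    for i, g_i in enumerate(projected):
+        for j, g_j in enumerate(grads):
+            if i == j:
+                continue
+            dot = torch.dot(g_i, g_j)
+            if dot &lt; 0:                               # conflicting pair
+                g_i.sub_(dot / g_j.norm() ** 2 * g_j)  # remove the conflict in place
+    return projected
+
+g1, g2 = torch.tensor([1.0, 0.0]), torch.tensor([-1.0, 1.0])
+print(pcgrad([g1, g2]))  # after surgery, the pair no longer conflicts
+</code></pre></div></div>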
+ + + + + GradNorm--Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks + + 2020-07-30T00:00:00-04:00 + /site/2020/07/30/GradNorm--Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes GradNorm, a gradient normalization algorithm that improves multi-task training by dynamically tuning the magnitude of the gradients corresponding to the different tasks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1711.02257">Link to the paper</a></p> + </li> +</ul> + +<h2 id="motivation">Motivation</h2> + +<ul> + <li> + <p>During multi-task training, some tasks can dominate the training, at the expense of others.</p> + </li> + <li> + <p>It is common to define the multi-task loss as a linearly weighted combination of the individual task losses.</p> + </li> + <li> + <p>The paper proposes two changes to this setup:</p> + + <ul> + <li> + <p>Adapt the weight-coefficients, assigned to each loss term, at each training step.</p> + </li> + <li> + <p>Directly modify the gradient magnitudes, corresponding to the different tasks, so that all the tasks learn at similar rates.</p> + </li> + </ul> + </li> + <li> + <p>The proposed GradNorm algorithm is similar to BatchNorm, but it performs normalization across tasks, not data batches.</p> + </li> +</ul> + +<h2 id="algorithm">Algorithm</h2> + +<ul> + <li> + <p>The target gradient norm at timestep $t$, for the $i^{th}$ task, is computed as the product between the average gradient norm (across all tasks at timestep $t$) and $r_i(t) ^ {\alpha}$.</p> + </li> + <li> + <p>$r_i$ is the relative inverse training rate of task $i$. It is defined as the ratio between the loss ratio of task $i$ and the average loss ratio (across all the tasks).</p> + </li> + <li> + <p>$\alpha$ is a hyperparameter.</p> + </li> + <li> + <p>This computed per-task gradient norm is treated as the target value for the actual gradient norms.</p> + </li> + <li> + <p>An additional $L_1$ loss between the actual and the target gradient norms, summed over all the tasks, is incorporated; it optimizes the weight-coefficients only (a sketch follows this list).</p> + </li> + <li> + <p>After every step, the weight-coefficients are renormalized to decouple the gradient normalization from the global learning rate.</p> + </li> + <li> + <p>Note that all the gradient norm computations are performed only for the layers on which GradNorm is applied. Generally, GradNorm is used with only the last shared layer of weights (to save on computational costs).</p> + </li> +</ul>
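+<p>A minimal sketch of the GradNorm objective, assuming the loss ratio $L_i(t) / L_i(0)$ is used as the inverse training rate; variable names are mine:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def gradnorm_loss(grad_norms, losses, initial_losses, alpha=1.5):
+    """grad_norms: per-task gradient norms at the last shared layer, (num_tasks,).
+
+    Target norms: mean gradient norm scaled by the relative inverse training
+    rate r_i ** alpha. The targets are detached so that this L1 loss only
+    updates the per-task weight-coefficients, not the network.
+    """
+    loss_ratios = losses / initial_losses  # inverse training rate per task
+    r = loss_ratios / loss_ratios.mean()   # relative inverse training rate
+    target = grad_norms.mean() * r ** alpha
+    return (grad_norms - target.detach()).abs().sum()
+</code></pre></div></div>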
+ +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Two variants of the NYUv2 dataset – NYUv2+seg (small dataset) and NYUv2+kpts (big dataset).</p> + </li> + <li> + <p>Both regression and classification setups were used.</p> + </li> + <li> + <p>Models:</p> + + <ul> + <li> + <p>SegNet with a symmetric VGG16 encoder/decoder.</p> + </li> + <li> + <p>FCN with a modified ResNet-50 as the encoder and a shallow ResNet as the decoder.</p> + </li> + </ul> + </li> + <li> + <p>Standard pixel-wise losses for each task.</p> + </li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li> + <p>GradNorm with $\alpha=1.5$ outperforms the equal-weight baseline and either surpasses or matches the best performance of single networks for each task.</p> + </li> + <li> + <p>Almost any value of 0 &lt; $\alpha$ &lt; 3 improves the network’s performance over the equal-weight baseline.</p> + </li> +</ul> + + + + + TaskNorm--Rethinking Batch Normalization for Meta-Learning + + 2020-07-23T00:00:00-04:00 + /site/2020/07/23/TASKNORM--Rethinking Batch Normalization for Meta-Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Meta-learning techniques have been shown to benefit from the use of deep neural networks.</p> + </li> + <li> + <p>BatchNorm is a commonly used component when training deep networks, especially for vision tasks.</p> + </li> + <li> + <p>However, BatchNorm and meta-learning make contradictory assumptions, and their combination may not work well in practice.</p> + </li> + <li> + <p>The paper proposes TaskNorm, a normalization method that is designed explicitly for meta-learning.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2003.03284">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Standard meta-learning setup with $k$ tasks, each task with its own context and target set.</p> + </li> + <li> + <p>Two sets of parameters are considered during meta-learning: (i) global parameters and (ii) task-specific parameters.</p> + </li> + <li> + <p>The meta-learning setup can be viewed as an inference task, where the task-specific parameters are inferred using the context set and some additional (trainable) parameters.</p> + </li> + <li> + <p>Normalization layers are commonly used to accelerate the training of neural networks. The general approach is to use normalization moments (statistics) along with some learned parameters.</p> + </li> + <li> + <p>BatchNorm is a well-known and widely used normalization approach. It relies on the implicit assumption that the dataset comprises iid samples from some underlying distribution.</p> + </li> + <li> + <p>However, in meta-learning, data points are assumed to be iid only within a specific task.</p> + </li> + <li> + <p>This leaves open the question of which moments to use at meta-train and meta-test time.</p> + </li> +</ul>
+ +<h2 id="variants-of-batchnorm">Variants of BatchNorm</h2> + +<h3 id="conventional-batchnorm-cbn">Conventional BatchNorm (CBN)</h3> + +<ul> + <li> + <p>Compute moments at meta-train time and use them at meta-test time.</p> + </li> + <li> + <p>This is equivalent to lumping the moments with the global parameters, i.e., the running moments are shared globally, while the data is iid only locally.</p> + </li> + <li> + <p>Using CBN with MAML leads to poor results.</p> + </li> + <li> + <p>Moreover, the meta-learning setup can sometimes require the use of a very small batch size (e.g., 1-shot learning). In those cases, the computed statistics are likely to be inaccurate.</p> + </li> +</ul> + +<h3 id="transductive-batchnorm-tbn">Transductive BatchNorm (TBN)</h3> + +<ul> + <li> + <p>Use context/target set statistics at both meta-train and meta-test time.</p> + </li> + <li> + <p>This is the default BatchNorm mode used in MAML.</p> + </li> +</ul> + +<h3 id="instance-based-normalization">Instance-based normalization</h3> + +<ul> + <li> + <p>Moments are computed separately for each instance.</p> + </li> + <li> + <p>This mode corresponds to treating the statistics as local at the observation level.</p> + </li> + <li> + <p>These methods provide only limited improvement in performance and can sometimes have a large overhead.</p> + </li> +</ul> + +<h2 id="task-normalization-proposed">Task Normalization (Proposed)</h2> + +<ul> + <li> + <p>The normalization statistics are local at the task level, and the statistics for a given data point should only depend on the data points in the context set. They should not depend on the other elements of the target set.</p> + </li> + <li> + <p>Meta-Batch Normalisation (METABN) is a precursor to TaskNorm, where the context set alone is used to compute the normalization statistics for both the context and the target set (during both meta-test and meta-train time).</p> + </li> + <li> + <p>METABN does not perform well when used with small context sets.</p> + </li> + <li> + <p>TaskNorm overcomes this limitation by using a set of non-transductive, secondary moments (computed from the input being normalized).</p> + </li> + <li> + <p>When the context is small, using the additional moments helps to improve the moment estimates.</p> + </li> + <li> + <p>In the general case, a trainable blending factor, $\alpha$, is used to combine the two sets of moments (a sketch follows this list).</p> + </li> + <li> + <p>While the computational cost of TaskNorm is slightly more than that of CBN, it converges faster than CBN in practice.</p> + </li> + <li> + <p>The normalization mechanism in Reptile can be interpreted as a particular case of TaskNorm.</p> + </li> +</ul>
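+<p>A minimal sketch of the blending idea, assuming scalar moments (per-channel details omitted) and a given $\alpha \in [0, 1]$; how $\alpha$ is parameterized as a function of the context-set size is left out here:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def task_norm(x, context, alpha, eps=1e-5):
+    """Blend context-set moments with the non-transductive, instance-level
+    moments of the input being normalized. alpha in [0, 1] is trainable;
+    small contexts should push alpha towards the instance moments.
+    """
+    mu_c, var_c = context.mean(), context.var()
+    mu_x, var_x = x.mean(), x.var()  # secondary moments of the input itself
+    mu = alpha * mu_c + (1 - alpha) * mu_x
+    var = alpha * var_c + (1 - alpha) * var_x
+    return (x - mu) / torch.sqrt(var + eps)
+</code></pre></div></div>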
+ +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Small-scale few-shot classification experiments</p> + + <ul> + <li> + <p>Omniglot and miniImageNet datasets.</p> + </li> + <li> + <p>First-order MAML, with different kinds of normalization schemes.</p> + </li> + <li> + <p>Transductive BatchNorm performs the best.</p> + </li> + <li> + <p>Among the non-transductive approaches, TaskNorm using the Instance Normalisation augmentation performs the best.</p> + </li> + <li> + <p>A similar trend holds for the speed of convergence as well.</p> + </li> + </ul> + </li> + <li> + <p>Large-scale few-shot classification experiments</p> + + <ul> + <li> + <p>Meta-Dataset benchmark.</p> + </li> + <li> + <p>CNAPs model.</p> + </li> + <li> + <p>The context set’s size varies across tasks in this setup and can be as small as 5.</p> + </li> + <li> + <p>TaskNorm with Instance Normalisation ranks first in 10 (out of 13) datasets and is also the fastest to train.</p> + </li> + <li> + <p>While the instance-based methods (Instance Normalisation and Layer Normalisation) are the slowest to converge, they still outperform the running-average-based methods (conventional BatchNorm).</p> + </li> + <li> + <p>The results demonstrate that designing meta-learning-specific normalization methods can significantly improve performance and that Transductive BatchNorm may not always be the optimal choice.</p> + </li> + </ul> + </li> +</ul> + + + + + Averaging Weights leads to Wider Optima and Better Generalization + + 2020-07-16T00:00:00-04:00 + /site/2020/07/16/Averaging Weights leads to Wider Optima and Better Generalization + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes the Stochastic Weight Averaging (SWA) procedure for improving the generalization performance of models trained with SGD (with a cyclic or constant learning rate).</p> + </li> + <li> + <p>Specifically, the model is checkpointed at several points along the training trajectory, and these checkpoints are averaged (in the parameter space) to obtain a single model.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1803.05407">Link to the paper</a></p> + </li> +</ul> + +<h2 id="idea">Idea</h2> + +<ul> + <li> + <p>“Stochastic” in the name refers to the idea that, with a cyclical or constant learning rate, SGD proposals are approximately sampled from a neural network’s loss surface and are hence stochastic.</p> + </li> + <li> + <p>SWA uses a learning rate schedule that allows exploration in the weight space.</p> + </li> + <li> + <p>SGD with cyclical and constant learning rates explores points (model instances) at the periphery of high-performing networks.</p> + </li> + <li> + <p>With different initializations, SGD will find different points (of low training loss) on this boundary, but will not move inside it.</p> + </li> + <li> + <p>Averaging the points provides a mechanism to move inside this periphery.</p> + </li> + <li> + <p>The train and the test error surfaces, while being similar, are not perfectly aligned. Hence, averaging several models (along the optimization trajectory) could lead to a more robust model.</p> + </li> +</ul>
+ +<h2 id="algorithm">Algorithm</h2> + +<ul> + <li> + <p>Given a model $w$ and some training budget $B$, train the model in the conventional way for approximately 75% of the budget.</p> + </li> + <li> + <p>Starting from that point, continue training with the remaining budget, with a constant or cyclical learning rate.</p> + </li> + <li> + <p>For a fixed learning rate, checkpoint the model at each epoch. For a cyclical learning rate, checkpoint the model at the lowest learning rate in the cycle.</p> + </li> + <li> + <p>Average all the checkpointed models to get the SWA model (a sketch follows this list).</p> + </li> + <li> + <p>If the model has Batch Normalization layers, run an additional pass over the data to compute the SWA model’s running mean and standard deviation.</p> + </li> + <li> + <p>The computational and space complexity of computing the SWA model is relatively low.</p> + </li> + <li> + <p>The paper highlights the ensembling-like effect of SWA by showing that if the model checkpoints ($w_i$) are generated by training with Fast Geometric Ensembling (FGE), the difference between averaging the weights and averaging the predictions is of the order $O(\Delta)$, where $\Delta = \max \|w_i - w_{SWA}\|$.</p> + </li> + <li> + <p>Note that SWA does not have the overhead of an extra forward pass during inference.</p> + </li> +</ul>
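+<p>A minimal sketch of the averaging step as a running mean over checkpointed state dicts (the function name is mine); the BatchNorm statistics still need the extra pass mentioned above:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def update_swa(swa_state, model_state, n_averaged):
+    """Running average in parameter space:
+    w_swa = (w_swa * n + w) / (n + 1) for every tensor in the state dict."""
+    for name, w in model_state.items():
+        swa_state[name] = (swa_state[name] * n_averaged + w) / (n_averaged + 1)
+    return swa_state, n_averaged + 1
+
+# At each checkpoint (every epoch, or at the lowest learning rate in a cycle):
+# swa_state, n = update_swa(swa_state, model.state_dict(), n)
+</code></pre></div></div>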
+ +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Datasets: CIFAR10, CIFAR100, ImageNet.</p> + </li> + <li> + <p>Models: VGG16, WideResNet, 164-layer preactivation ResNet, Shake-Shake, PyramidNet.</p> + </li> + <li> + <p>Baselines: conventional SGD, exponentially decaying average with SGD, and FGE.</p> + </li> + <li> + <p>In all the CIFAR experiments, SWA consistently outperforms SGD within a single training budget and consistently improves with more training.</p> + </li> + <li> + <p>SWA also achieves performance comparable to FGE, despite FGE being an ensemble method.</p> + </li> + <li> + <p>On ImageNet, SWA is run on a pre-trained model, and it improves performance in all the cases.</p> + </li> + <li> + <p>An ablation experiment (on CIFAR-100) shows that it is possible to train a network (with SWA) using a fixed learning rate. In that setup, using SWA improves performance by 16%.</p> + </li> +</ul> + + + + + Decentralized Reinforcement Learning -- Global Decision-Making via Local Economic Transactions + + 2020-07-09T00:00:00-04:00 + /site/2020/07/09/Decentralized Reinforcement Learning -- Global Decision-Making via Local Economic Transactions + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper explores the connections between the concepts of a single agent vs. a society of agents.</p> + </li> + <li> + <p>A society of agents can be modeled as a single agent, while a single agent can be modeled as a society of components (or sub-agents).</p> + </li> + <li> + <p>The paper focuses on mechanisms for training a society of self-interested agents to solve a given task – as if the society were a single agent.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2007.02382">Link to the paper</a></p> + </li> +</ul> + +<h2 id="contributions">Contributions</h2> + +<ul> + <li> + <p>The <strong>societal decision-making</strong> framework relates the local optimization problem of a single agent with the global optimization problem of a society of agents.</p> + </li> + <li> + <p>The <strong>Cloned Vickrey Society</strong> is proposed as a mechanism to guarantee that an agent’s dominant strategy equilibrium coincides with the group’s optimal policy.</p> + </li> + <li> + <p>A class of <strong>decentralized RL algorithms</strong> that optimize the MDP objective of the society as a whole, as a consequence of the individual agents optimizing their own objectives.</p> + </li> + <li> + <p>Empirical evaluation of the Cloned Vickrey Society using an implementation called <strong>Credit Conserving Vickrey</strong>.</p> + </li> +</ul> + +<h2 id="terminology">Terminology</h2> + +<ul> + <li> + <p><em>Environment</em> - a tuple that specifies an input space, an output space, and parameters for determining an objective.</p> + + <ul> + <li>A standard RL setup can be mapped to an <em>environment</em> by mapping the state space to the input space, the action space to the output space, and the reward function, transition function, and discount factor to the parameters specifying the objective.</li> + </ul> + </li> + <li> + <p><em>Agent</em> - a function that maps the input space to the output space.</p> + </li> + <li> + <p><em>Objective</em> - a functional that maps an agent to a real number.</p> + </li> + <li> + <p>In <em>auction environments</em>, the input space is a single auction item (say <em>s</em>), and the output space is the bidding space <em>B</em>.</p> + </li> + <li> + <p>There are <em>N</em> agents who compete by bidding for an item <em>s</em> using their bidding policies.</p> + </li> + <li> + <p>$b$ is the vector of bids produced by the agents.</p> + </li> + <li> + <p>$v_s$ is the vector of the agents’ valuations of item <em>s</em>.</p> + </li> + <li> + <p>The $i^{th}$ agent’s utility is given as $v_s^i \times X^i(b) - P^i(b)$. Here, $X^i(b)$ is the portion of $s$ allocated to the $i^{th}$ agent, and $P^i(b)$ is the price that the $i^{th}$ agent pays.</p> + </li> +</ul>
+ +<h2 id="design-choices">Design Choices</h2> + +<ul> + <li> + <p>Each agent independently maximizes its own utility.</p> + </li> + <li> + <p>Under certain conditions (i.e., if the auction is dominant strategy incentive compatible), it is optimal for each agent to bid its true valuation.</p> + </li> + <li> + <p>These conditions are satisfied by the Vickrey auction, where $P^i(b)$ is set to be the second-highest bid and $X^i(b) = 1$ if the $i^{th}$ agent wins (and 0 otherwise). A sketch of this second-price rule follows at the end of this post.</p> + </li> + <li> + <p>A <em>society</em> is a set of agents, where each agent is a tuple of a bidding policy $\psi$ and a transformation function.</p> + </li> + <li> + <p>The environment is modeled at two levels: (i) a global environment (referred to as the global MDP) and (ii) local environments (referred to as local auctions).</p> + </li> + <li> + <p>Each state $s$ in the global MDP is an auction item in a different auction. The winner (of the local auction at $s$) transforms $s$ into some other state $s’$.</p> + </li> + <li> + <p>If these transformations are modeled as actions, then the proposed framework can be interpreted as a decentralized reinforcement learning framework.</p> + </li> + <li> + <p>Motivated by the design of a market economy (where economic transactions determine the wealth distribution), the paper proposes that, for an agent, the valuation of winning an auction is the revenue it can receive in the auction at the next timestep by selling the transformed state.</p> + </li> + <li> + <p>A global MDP that adheres to this design is referred to as the Market MDP.</p> + </li> + <li> + <p>There is a catch in the design of the Market MDP: the winning agent at time $t$ receives the amount that the highest bidder is willing to pay at time $t+1$, but the winner at time $t+1$ only pays the second-highest bid. Hence, credit is not conserved.</p> + </li> + <li> + <p>This inconsistency can be fixed by introducing “duplicate” (or cloned) agents, and the resulting society is called the Cloned Vickrey Society.</p> + </li> + <li> + <p>The Cloned Vickrey Auction mechanism is compared against alternate bidding mechanisms like the <em>first price auction</em> (where the winner pays the bid they proposed), a solitary version of the Vickrey auction (no cloning), and <em>Environment Reward</em>, where only the environment reward is used and there is no price term.</p> + </li> + <li> + <p>It is empirically shown that the Cloned Vickrey Auction learns bids that are closest to the agents’ actual valuations. Moreover, the solitary version leads to bids that are more spread out than the ones learned by the cloned version. This highlights the importance of competitive pressure for learning good bid values.</p> + </li> + <li> + <p>Three different implementations of the Cloned Vickrey Auction are considered:</p> + + <ul> + <li> + <p>Bucket Brigade (BB) - the winner at timestep $t$ receives the highest bid at timestep $t+1$, and the subsequent winner pays the highest bid. This case satisfies Credit Conservation and Bellman Optimality.</p> + </li> + <li> + <p>Vickrey (V) - the winner at timestep $t$ receives the highest bid at timestep $t+1$, and the subsequent winner pays the second-highest bid. This case satisfies Truthful Dominant Strategy and Bellman Optimality.</p> + </li> + <li> + <p>Credit Conserving Vickrey (CCV) - the winner at timestep $t$ receives the second-highest bid at timestep $t+1$, and the subsequent winner pays the second-highest bid. This case satisfies Truthful Dominant Strategy and Credit Conservation.</p> + </li> + </ul> + </li> + <li> + <p>The CCV implementation provides bid values closest to the optimal Q-values.</p> + </li> + <li> + <p>In one experiment, the paper explores the use of the proposed approach for selecting between sub-policies. It shows that CCV is more sample efficient for pretraining sub-policies and adapting them to transfer tasks.</p> + </li> + <li> + <p>In another experiment, the task is to transform MNIST images by composing two out of 6 affine transformations. The transformed images are fed to a pretrained classifier that predicts a label. The agent gets a reward of 1 if the classifier makes the correct prediction and 0 otherwise. The CCV implementation obtains a mean reward of 0.933, thus highlighting the effectiveness of the CCV model.</p> + </li> +</ul>
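+<p>A minimal sketch of a single CCV auction step under the rules above (names and toy bids are mine): the highest bidder wins, and both the payment made at $t$ and the revenue received from the auction at $t+1$ use the second-highest bid, so credit is conserved.</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def ccv_auction_step(bids):
+    """Run one Credit Conserving Vickrey auction over a list of bids.
+
+    Returns the winner's index and the second-highest bid, which is both the
+    price the winner pays now and the revenue the previous winner receives.
+    """
+    order = sorted(range(len(bids)), key=lambda i: -bids[i])
+    winner, second_price = order[0], bids[order[1]]
+    return winner, second_price
+
+winner, price = ccv_auction_step([0.7, 0.9, 0.4])
+print(winner, price)  # agent 1 wins and pays 0.7
+</code></pre></div></div>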
+ + + + + When to use parametric models in reinforcement learning? + + 2020-07-02T00:00:00-04:00 + /site/2020/07/02/When to use parametric models in reinforcement learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper compares replay-based approaches with model-based approaches in Reinforcement Learning (RL).</p> + </li> + <li> + <p>It hypothesizes that if the parametric model is only used for generating transitions for the update rule, then, under certain conditions, replay-based approaches will be as good as model-based approaches.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1906.05243">Link to the paper</a></p> + </li> +</ul> + +<h2 id="terminology">Terminology</h2> + +<ul> + <li> + <p>Planning: any algorithm that uses additional computation (but not additional experience) to improve its performance.</p> + </li> + <li> + <p>Learning: any algorithm that uses additional experience to improve its performance.</p> + </li> + <li> + <p>In some cases, a replay buffer can be seen as a model. For example, querying with a state-action pair (from the replay buffer) is similar to querying a model for the (expected) next state and reward. In general, a model is more flexible, as any arbitrary state-action pair can be used for querying.</p> + </li> +</ul> + +<h2 id="computation-properties">Computation Properties</h2> + +<ul> + <li> + <p>Parametric models require more computation than sampling from a replay buffer. In contrast, the cost of maintaining a replay buffer scales linearly with its capacity.</p> + </li> + <li> + <p>Parametric models are useful for planning multiple steps into the future, while it is much harder to do so with a replay buffer (even more so with pixel observations).</p> + </li> + <li> + <p>An imperfect model may be more suitable for selecting actions (instead of updating the policy) because the chosen action, when executed in the environment, will lead to transitions that can improve the model.</p> + </li> + <li> + <p>When planning with an imperfect model, it is better to plan backward, as the update is then applied to an imaginary state (which would not be encountered if the model is poor).</p> + </li> + <li> + <p>If the model is accurate, forward and backward planning are equivalent. This distinction between forward and backward updates does not apply to replay buffers.</p> + </li> +</ul>
+ +<h2 id="failure-to-learn">Failure to learn</h2> + +<ul> + <li> + <p>When using a replay buffer and (i) uniformly replaying transitions, (ii) from a buffer containing only full episodes, and (iii) using TD updates, the algorithm is stable.</p> + </li> + <li> + <p>When using a replay buffer and (i) uniformly replaying transitions, (ii) generating transitions using a model, and (iii) using TD updates, the algorithm can diverge.</p> + </li> + <li> + <p>This case can be fixed by:</p> + + <ul> + <li> + <p>Repeatedly iterating over the model and sampling transitions <em>to</em> and <em>from</em> the states the model generates (not a satisfactory solution).</p> + </li> + <li> + <p>Using multiple-step returns (this can increase the variance).</p> + </li> + <li> + <p>Using algorithms designed specifically for stable off-policy learning (not a definitive solution).</p> + </li> + </ul> + </li> +</ul> + +<h2 id="model-based-algorithms-at-scale">Model-based algorithms at scale</h2> + +<ul> + <li> + <p>The paper compares SimPLe (model-based) with Rainbow DQN (replay-based).</p> + </li> + <li> + <p>The paper shows that, when using a similar number of real interactions, Rainbow DQN needs fewer replay samples than SimPLe needs model samples, making it more efficient (computation-wise).</p> + </li> + <li>Changes to Rainbow DQN: + <ul> + <li>Increase the number of steps, for bootstrapping, from 3 to 20.</li> + <li>Reduce the number of steps, before sampling starts from the replay buffer, from 20K to 1600.</li> + </ul> + </li> + <li>With these changes, Rainbow DQN outperforms SimPLe in 17 out of 26 games.</li> +</ul> + +<h2 id="conclusion">Conclusion</h2> + +<ul> + <li> + <p>When using a parametric model in a replay-like setting (sampling observed states from the past), model-based learning can be unstable (in theory). Using a replay buffer is likely a better strategy under this state sampling distribution.</p> + </li> + <li> + <p>Parametric models are likely more useful when:</p> + <ul> + <li>planning backward for credit assignment - even if the model is inaccurate, backward planning will only update fictional states.</li> + <li>planning forward for behavior - the resulting plan is only used to collect real <em>experience</em> in the environment (and not to directly update the policy).</li> + </ul> + </li> +</ul> + + + + + Network Randomization - A Simple Technique for Generalization in Deep Reinforcement Learning + + 2020-06-25T00:00:00-04:00 + /site/2020/06/25/Network Randomization-A Simple Technique for Generalization in Deep Reinforcement Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes a technique for improving the generalization ability of RL agents when evaluated on an unseen environment (which is similar to the training environment).</p> + </li> + <li> + <p><a href="https://openreview.net/forum?id=HJgcvJBFvB">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/pokaxpoka/netrand">Link to the code</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>The key idea is to learn features that are invariant across environments by using a randomized CNN (<em>f</em>) that randomly perturbs the inputs.</p> + </li> + <li> + <p>The policy is trained using the randomized observations obtained using <em>f</em>.</p> + </li> + <li> + <p>Invariant features are learned using a feature matching (FM) loss that matches the feature representations of the original and randomized observations.</p> + </li> + <li> + <p>The random network’s parameters are initialized as $\alpha I + (1 - \alpha) N(0, \sqrt\frac{2}{n_{in} + n_{out}})$, where $\alpha \in [0, 1]$, $N$ denotes the Gaussian distribution, and $n_{in}, n_{out}$ denote the number of input and output channels respectively (a sketch follows this list).</p> + </li> + <li> + <p>The Xavier normal distribution is used for the randomization to maintain the variance between the input and the randomized input.</p> + </li> + <li> + <p><em>f</em> is re-randomized at every iteration.</p> + </li> + <li> + <p>During inference, the expected action is computed by averaging over <em>M</em> samples (i.e., by randomizing the input <em>M</em> times).</p> + </li> +</ul>
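+<p>A minimal sketch of the initialization and the FM loss, following the formula above; the per-channel construction of the identity kernel $I$ is my reading, not necessarily the paper’s exact implementation:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+def randomize(conv, alpha=0.5):
+    """Re-initialize the random CNN f as alpha * I + (1 - alpha) * Xavier noise."""
+    n_in, n_out = conv.in_channels, conv.out_channels
+    xavier = torch.randn_like(conv.weight) * (2.0 / (n_in + n_out)) ** 0.5
+    identity = torch.zeros_like(conv.weight)
+    c = conv.kernel_size[0] // 2
+    for i in range(min(n_in, n_out)):
+        identity[i, i, c, c] = 1.0  # pass each channel through unchanged
+    conv.weight.data = alpha * identity + (1 - alpha) * xavier
+
+def fm_loss(h_clean, h_random):
+    # Match features of the original and randomized observations.
+    return F.mse_loss(h_clean, h_random)
+</code></pre></div></div>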
+ +<h2 id="environments">Environments</h2> + +<ul> + <li> + <p>2D CoinRun, 3D DeepMind Lab, 3D robotics control task.</p> + </li> + <li> + <p>The evaluation environments consist of different styles of backgrounds, objects, and floors.</p> + </li> +</ul> + +<h2 id="baselines">Baselines</h2> + +<ul> + <li> + <p>Regularization methods: Dropout, L2 regularization, Batch Normalization.</p> + </li> + <li> + <p>Dataset augmentation methods: Cutout, Gray out, Inversion, Color Jitter.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>On CoinRun, the proposed approach significantly outperforms the other baselines during evaluation. The performance improvement saturates around <em>M</em> = 10 samples.</p> + </li> + <li> + <p>Cycle consistency is used to measure the similarity between two trajectories. The proposed method improves cycle consistency as compared to the vanilla PPO baseline. It also produces sharper activation maps in the evaluation environments.</p> + </li> + <li> + <p>For the large-scale experiments, when evaluated on 500 levels of CoinRun, the proposed method improves the success rate from 39.8% to 58.7%.</p> + </li> + <li> + <p>On DeepMind Lab and Surreal robotics control tasks, the proposed method leads to agents that generalize better on the unseen environments (during evaluation).</p> + </li> +</ul>
+ + + + + On the Difficulty of Warm-Starting Neural Network Training + + 2020-06-18T00:00:00-04:00 + /site/2020/06/18/On the Difficulty of Warm-Starting Neural Network Training + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper considers learning scenarios where the training data is available incrementally (and not all at once).</p> + </li> + <li> + <p>For example, in some applications, new data is available periodically (e.g., the latest news articles come out every day).</p> + </li> + <li> + <p>The paper highlights that, in such scenarios, the conventional wisdom of “warm starting” does not apply.</p> + </li> + <li> + <p>When new data is available, it is better to train a new model from scratch than to update the model trained on the previously available data.</p> + </li> + <li> + <p>While the two setups lead to similar training performance, the randomly initialized model has a much better generalization performance.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1910.08475">Link to the paper</a></p> + </li> +</ul> + +<h2 id="basic-batch-updating">Basic Batch Updating</h2> + +<ul> + <li> + <p>Create two random, equally-sized partitions of the training data.</p> + </li> + <li> + <p>Train the model till convergence on the first half of the data. Then train the model on the entire dataset.</p> + </li> + <li> + <p>Models: ResNet18, MLPs, Logistic Regression (LR).</p> + </li> + <li> + <p>Datasets: CIFAR10, CIFAR100, SVHN.</p> + </li> + <li> + <p>Optimizers: Adam, SGD.</p> + </li> + <li> + <p>Warm starting hurts generalization in all the cases.</p> + </li> + <li> + <p>The effect is more pronounced in the case of ResNets and MLPs (compared to LR) and on the harder CIFAR10 dataset (as compared to the SVHN dataset).</p> + </li> +</ul> + +<h2 id="online-learning">Online Learning</h2> + +<h3 id="passive-online-learning">Passive Online Learning</h3> + +<ul> + <li> + <p>The model is given access to k new training examples at each iteration (the protocol is sketched after this list).</p> + </li> + <li> + <p>A warm-started model reuses the previously trained model and trains (till convergence) on the new batch of k items.</p> + </li> + <li> + <p>A “randomly initialized” model is trained on all the examples (seen so far) from scratch.</p> + </li> + <li> + <p>Dataset: CIFAR10.</p> + </li> + <li> + <p>Model: ResNet18.</p> + </li> + <li> + <p>As more training data becomes available, the generalization gap between the two setups increases, and warm starting increasingly hurts generalization.</p> + </li> +</ul>
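+<p>A minimal sketch of the two training protocols being compared (function names and the training-loop interface are mine):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def passive_online_comparison(data_stream, make_model, fit, k=1000):
+    """At each round, k new labelled examples arrive. The warm-started model
+    continues from its current weights and fits only the new batch; the
+    baseline re-initializes and fits everything seen so far."""
+    seen = []
+    warm_model = make_model()
+    for new_batch in data_stream:     # each new_batch holds k examples
+        seen.extend(new_batch)
+        fit(warm_model, new_batch)    # warm start: converge on the new data
+        cold_model = make_model()     # fresh random initialization
+        fit(cold_model, seen)         # train from scratch on all data so far
+        yield warm_model, cold_model  # compare generalization of the two
+</code></pre></div></div>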
+ +<h3 id="active-online-learning">Active Online Learning</h3> + +<ul> + <li> + <p>In this setup, the learner samples k new examples to add to the training dataset (using margin-based sampling).</p> + </li> + <li> + <p>Like in the previous setup, the warm-start strategy still hurts generalization.</p> + </li> +</ul> + +<h2 id="transfer-learning">Transfer Learning</h2> + +<ul> + <li> + <p>Train a ResNet18 model on the CIFAR10 dataset and use this model to warm-start training on the SVHN dataset.</p> + </li> + <li> + <p>When a small percentage of the SVHN dataset is used, the setup resembles pretraining / transfer learning and performs better than training from scratch.</p> + </li> + <li> + <p>As the percentage of the SVHN dataset increases, the warm-start approach starts underperforming.</p> + </li> +</ul> + +<h2 id="overcoming-warm-start-problem">Overcoming warm start problem</h2> + +<ul> + <li> + <p>ResNet18 model on the CIFAR10 dataset.</p> + </li> + <li> + <p>When performing a hyperparameter sweep over the learning rate and batch size, it is possible to train warm-started models to reach the same generalization performance as training from scratch.</p> + </li> + <li> + <p>Though, in that case, there are no computational savings, as the warm-started models take about the same time (to converge) as the randomly initialized models.</p> + </li> + <li> + <p>The increased training time indicates that the warm-started model probably needs to forget the knowledge from the previous training rounds.</p> + </li> + <li> + <p>Warm-started ResNet models that generalize well have a low correlation to their initialization (measured via the Pearson correlation coefficient between the model weights).</p> + </li> + <li> + <p>Generalization is damaged even when using a model trained on the incomplete data for only a few epochs.</p> + </li> + <li> + <p>For warm-started models, the gradient (corresponding to the “new” data) is higher than that for randomly initialized models. This hints that regularization may help to close the generalization gap. But in practice, regularization helps both the warm-started and the randomly initialized models.</p> + </li> + <li> + <p>Warm starting only a few layers also does not close the gap.</p> + </li> + <li> + <p>Adding some noise to the warm-started model (with the motivation of having a partially random initialization) does help somewhat, but it also increases the training time.</p> + </li> + <li> + <p>Motivating the problem as an instance of catastrophic forgetting, the authors use the EWC algorithm but report that using EWC hurts model performance.</p> + </li> + <li> + <p>The paper does not propose a solution to the problem but provides a thorough analysis of the problem setup, which is quite useful for understanding the phenomenon itself.</p> + </li> +</ul> + + + + + Supervised Contrastive Learning + + 2020-04-30T00:00:00-04:00 + /site/2020/04/30/Supervised Contrastive Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper builds on the prior work on self-supervised contrastive learning and extends it to the supervised learning case, where many positive examples are available for each anchor.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/2004.11362">Link to the paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li>The representation learning framework has the following components:</li> +</ul> + +<h3 id="data-augmentation-module">Data Augmentation Module</h3> + +<ul> + <li> + <p>This module transforms the input example. The paper considers the following strategies:</p> + + <ul> + <li>Random crop, followed by resizing.</li> + <li><a href="https://arxiv.org/abs/1805.09501">AutoAugment</a> - a method to search for data augmentation strategies.</li> + <li><a href="https://arxiv.org/abs/1909.13719">RandAugment</a> - randomly sampling a sequence of data augmentations, with repetition.</li> + <li>SimAugment - sequentially apply random color distortion and Gaussian blurring, followed by a probabilistic sparse image warp.</li> + </ul> + </li> +</ul>
+ +<h3 id="encoder-network">Encoder Network</h3> + +<ul> + <li> + <p>This module maps the input to a latent representation.</p> + </li> + <li> + <p>The same network is used to encode both the anchor and the sample.</p> + </li> + <li> + <p>The representation vector is normalized to lie on the unit hypersphere.</p> + </li> +</ul> + +<h3 id="projection-network">Projection Network</h3> + +<ul> + <li> + <p>This module maps the normalized representation to another representation, on which the contrastive loss is computed.</p> + </li> + <li> + <p>This network is only used for training with the supervised contrastive loss.</p> + </li> +</ul> + +<h3 id="loss-function">Loss function</h3> + +<ul> + <li> + <p>The paper extends the standard contrastive loss formulation to handle multiple positive examples (a sketch follows this section).</p> + </li> + <li> + <p>The main effect is that the modified loss accounts for all the same-class pairs (from within the sampled batch as well as the augmented batch).</p> + </li> + <li> + <p>The paper shows that the gradient (corresponding to the modified loss) causes the learning to focus more on hard examples. “Hard” cases are the ones where contrasting the anchor benefits the encoder more.</p> + </li> + <li> + <p>The proposed loss can also be seen as a generalization of the triplet loss.</p> + </li> +</ul>
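+<p>A minimal sketch of the multi-positive loss on projected representations (one plausible placement of the average over positives; variable names are mine):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def supcon_loss(z, labels, tau=0.07):
+    """z: (N, d) projections of the augmented batch; labels: (N,) class ids.
+    Every other same-class sample is a positive; all remaining samples
+    (not just one) act as negatives for the anchor."""
+    z = F.normalize(z, dim=1)                  # project onto the unit hypersphere
+    sim = z @ z.T / tau
+    self_mask = torch.eye(len(z), dtype=torch.bool)
+    sim = sim.masked_fill(self_mask, float('-inf'))  # an anchor never contrasts itself
+    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
+    log_prob = log_prob.masked_fill(self_mask, 0.0)
+    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) &amp; ~self_mask
+    # Average the log-probability over each anchor's positives, then over anchors.
+    return -(log_prob * pos.float()).sum(1).div(pos.sum(1).clamp(min=1)).mean()
+
+loss = supcon_loss(torch.randn(8, 128), torch.randint(0, 3, (8,)))
+</code></pre></div></div>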
+ +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Dataset - ImageNet.</p> + </li> + <li> + <p>Models - ResNet50, ResNet200.</p> + </li> + <li> + <p>The network is “pretrained” using the supervised contrastive loss.</p> + </li> + <li> + <p>After pre-training, the projection network is removed, and a linear classifier is added.</p> + </li> + <li> + <p>This classifier is trained with the CE loss while the rest of the network is kept fixed.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>Using the supervised contrastive loss improves over all the baseline models and data augmentation approaches.</p> + </li> + <li> + <p>The resulting classifier is more robust to image corruptions, as shown by the mean Corruption Error (mCE) metric on the ImageNet-C dataset.</p> + </li> + <li> + <p>The model is more stable to the choice of hyperparameter values (like optimizers, data augmentation, and learning rates).</p> + </li> +</ul> + +<h2 id="training-details">Training Details</h2> + +<ul> + <li> + <p>The supervised contrastive loss is trained for 700 epochs during pre-training.</p> + </li> + <li> + <p>Each step is about 50% more expensive than performing CE.</p> + </li> + <li> + <p>The dense classifier layer can be trained in as few as ten epochs.</p> + </li> + <li> + <p>The temperature value is set to 0.07. Using a lower temperature is better than using a higher temperature.</p> + </li> +</ul> + + + + + + CURL - Contrastive Unsupervised Representations for Reinforcement Learning + + 2020-04-09T00:00:00-04:00 + /site/2020/04/09/CURL Contrastive Unsupervised Representations for Reinforcement Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes a contrastive learning approach, called CURL, for performing off-policy control from raw pixel observations (by transforming them into high-dimensional features).</p> + </li> + <li> + <p>The idea is motivated by the application of contrastive losses in computer vision. But there are additional challenges:</p> + + <ul> + <li> + <p>The learning agent has to perform both unsupervised and reinforcement learning.</p> + </li> + <li> + <p>The “dataset” for unsupervised learning is not fixed and keeps changing with the policy of the agent.</p> + </li> + </ul> + </li> + <li> + <p>Unlike prior work, CURL introduces fewer changes in the underlying RL pipeline and provides more significant sample-efficiency gains. For example, CURL (trained on pixels) nearly matches the performance of an SAC policy (trained on state-based features).</p> + </li> + <li> + <p><a href="https://github.com/MishaLaskin/curl">Link to the code</a></p> + </li> +</ul> + +<h2 id="implementation">Implementation</h2> + +<ul> + <li> + <p>CURL uses instance discrimination. Deep RL algorithms commonly use a stack of temporally consecutive frames as input to the policy. In such cases, instance discrimination is applied to all the images in the stack.</p> + </li> + <li> + <p>For generating the positive and negative samples, random crop data augmentation is used.</p> + </li> + <li> + <p>A bilinear inner product is used as the similarity metric, as it outperforms the commonly used normalized dot product (a sketch follows this list).</p> + </li> + <li> + <p>For encoding the anchors and the samples, InfoNCE is used. It learns two encoders $f_q$ and $f_k$ that transform the query (base input) and the key (positive/negative samples) into latent representations. The similarity loss is applied to these latents.</p> + </li> + <li> + <p>Momentum contrast is used to update the parameters ($\theta_k$) of the $f_k$ network, i.e., $\theta_k = m \theta_k + (1-m) \theta_q$. $\theta_q$ are the parameters of the $f_q$ network and are updated in the usual way, using both the contrastive loss and the RL loss.</p> + </li> +</ul>
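+<p>A minimal sketch of the two pieces named above, the bilinear similarity and the momentum update (the shape of the learnable matrix $W$ and the value of $m$ are assumptions):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def bilinear_logits(q, k, W):
+    """Bilinear similarity q^T W k between query and key latents; used as the
+    InfoNCE logits, with positives on the diagonal."""
+    return q @ W @ k.T
+
+@torch.no_grad()
+def momentum_update(theta_k, theta_q, m=0.999):
+    """theta_k = m * theta_k + (1 - m) * theta_q; only theta_q receives
+    gradients (from the contrastive and RL losses)."""
+    for p_k, p_q in zip(theta_k, theta_q):
+        p_k.mul_(m).add_(p_q, alpha=1 - m)
+
+q, k, W = torch.randn(8, 64), torch.randn(8, 64), torch.randn(64, 64)
+logits = bilinear_logits(q, k, W)  # (8, 8); entry [i, i] is the positive pair
+</code></pre></div></div>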
+
+<h2 id="experiment">Experiment</h2>
+
+<ul>
+  <li>
+    <p>DMControl100K and Atari100K refer to the setups where the agent is trained for 100K steps on DMControl and Atari, respectively.</p>
+  </li>
+  <li>
+    <p>Metrics:</p>
+
+    <ul>
+      <li>
+        <p>Sample Efficiency - How many steps does the baseline need to match CURL’s performance after 100K steps.</p>
+      </li>
+      <li>
+        <p>Performance - Ratio of episodic returns by CURL vs. the baseline after 100K steps.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Baselines:</p>
+
+    <ul>
+      <li>
+        <p>DMControl</p>
+
+        <ul>
+          <li><a href="https://arxiv.org/abs/1910.01741">SAC-AE</a></li>
+          <li><a href="https://arxiv.org/abs/1907.00953">SLAC</a></li>
+          <li><a href="https://planetrl.github.io/">PlaNet</a></li>
+          <li><a href="https://openreview.net/forum?id=S1lOTC4tDS">Dreamer</a></li>
+          <li><a href="https://arxiv.org/abs/1812.05905">Pixel SAC</a></li>
+          <li>SAC trained on state-space observations</li>
+        </ul>
+      </li>
+      <li>
+        <p>Atari</p>
+
+        <ul>
+          <li><a href="https://arxiv.org/abs/1903.00374">SimPLe</a></li>
+          <li><a href="https://arxiv.org/abs/1710.02298">RainbowDQN</a></li>
+          <li><a href="https://openreview.net/forum?id=Bke9u1HFwB">OTRainbow (Over Trained Rainbow)</a></li>
+          <li><a href="https://arxiv.org/abs/1906.05243">Efficient Rainbow</a></li>
+          <li>Random Agent</li>
+          <li>Human Performance</li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Results</p>
+
+    <ul>
+      <li>
+        <p>CURL outperforms all pixel-based RL algorithms by a significant margin for all environments on DMControl and most environments on Atari.</p>
+      </li>
+      <li>
+        <p>On DMControl, it closely matches the performance of the SAC agent trained on state-space observations.</p>
+      </li>
+      <li>
+        <p>On Atari, it achieves a better median human-normalized score (HNS) than the other baselines and comes close to human sample efficiency in three environments.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+
+
+
+
+   Competitive Training of Mixtures of Independent Deep Generative Models
+
+   2020-03-12T00:00:00-04:00
+   /site/2020/03/12/Competitive Training of Mixtures of Independent Deep Generative Models
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes a competitive training mechanism to train a mixture of independent generative models.</p>
+  </li>
+  <li>
+    <p>The idea is that this mixture of different models would divide the data distribution amongst themselves and specialize to their respective splits.</p>
+  </li>
+  <li>
+    <p>The training procedure is related to clustering-based methods.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1804.11130">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="motivation">Motivation</h2>
+
+<ul>
+  <li>
+    <p>In causal modeling, a common assumption is that the data is generated by a set of independent mechanisms.</p>
+  </li>
+  <li>
+    <p>It is not known which mechanism generates which datapoint, and recovering the underlying mechanisms can be modeled as learning a structural causal generative model.</p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>The paper assumes that the supports of the different generators do not overlap, i.e., the underlying data distribution is factorized into non-overlapping regions.</p>
+  </li>
+  <li>
+    <p>This data factorization is learned using a set of discriminators.</p>
+  </li>
+  <li>
+    <p>If there are $k$ generators, $k$ binary partition functions $c_1, \ldots, c_k$ are used.</p>
+  </li>
+  <li>
+    <p>For a given datapoint $x$, if $c_i(x) = 1$ then $c_j(x) = 0$ for all other $j$, and $x$ is assigned to the $i^{th}$ generator.</p>
+  </li>
+  <li>
+    <p>For a fixed partition function $c_j^t$ ($t$ denotes the partition function at time $t$), minimize the sum of f-divergences between each model and the data distribution assigned to it. The loss formulation is an upper bound on the f-divergence of the mixture model.</p>
+  </li>
+  <li>
+    <p>In the next step, the data points are re-assigned to the generative models, based on the likelihood of each data point under each model.</p>
+  </li>
+  <li>
+    <p>The likelihood is estimated by training a discriminator that can distinguish the generated samples from the real samples.</p>
+  </li>
+</ul>
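+
+<p>A hedged sketch of the reassignment step above, assuming each generator has a discriminator whose output <code class="language-plaintext highlighter-rouge">d(x)</code> estimates the probability that x is real rather than generated; the function and the ratio-based scoring are my own reading of the setup:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def reassign(points, discriminators):
+    # For an optimal discriminator, (1 - D(x)) / D(x) is proportional to
+    # p_model(x) / p_data(x), so it ranks generators by estimated likelihood.
+    scores = []
+    for d in discriminators:
+        p_real = np.clip(d(points), 1e-6, 1.0 - 1e-6)  # D_i(x) for every point
+        scores.append((1.0 - p_real) / p_real)         # estimated model likelihood
+    return np.stack(scores).argmax(axis=0)             # generator index per point
+</code></pre></div></div>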
+
+<h3 id="independence-as-an-inductive-bias">Independence as an inductive bias</h3>
+
+<ul>
+  <li>
+    <p>The independence assumption may be too restrictive because the low-level features will be common across the distribution splits.</p>
+  </li>
+  <li>
+    <p>This “violation” can be avoided by pretraining the model using a uniform random split of the dataset. In that case, the independence assumption will hold approximately after pretraining.</p>
+  </li>
+  <li>
+    <p>Another approach could be to share some parameters across the models.</p>
+  </li>
+  <li>
+    <p>A “load balancing” approach is also used, where a model keeps training on the data points previously assigned to it if not enough new data points are assigned to it.</p>
+  </li>
+</ul>
+
+<h3 id="comparison-to-vaes-and-gans">Comparison to VAEs and GANs</h3>
+
+<ul>
+  <li>
+    <p>VAEs tend to be “overly inclusive” of the training distribution, i.e., they try to cover the entire support of the distribution.</p>
+  </li>
+  <li>
+    <p>GANs are prone to mode collapse, where the model focuses only on one part of the distribution.</p>
+  </li>
+  <li>
+    <p>The proposed method provides a middle ground where the different generative models can focus on different parts of the distribution.</p>
+  </li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p>The experiments seem to be limited. The paper shows that the proposed setup improves over the VAE and GAN baselines.</p>
+  </li>
+  <li>
+    <p>For datasets, the paper uses two-dimensional synthetic data, MNIST, and CelebA.</p>
+  </li>
+</ul>
+
+
+
+
+
+   What Does Classifying More Than 10,000 Image Categories Tell Us?
+
+   2020-03-05T00:00:00-05:00
+   /site/2020/03/05/What Does Classifying More Than 10,000 Image Categories Tell Us
+   <ul>
+  <li>
+    <p>The paper is among the first to study image classification at a large scale (10000 classes and 9 million examples).</p>
+  </li>
+  <li>
+    <p>This is a relatively old paper (2010). Some of the findings may not be relevant anymore. For instance, specific scaling challenges have been significantly overcome. Moreover, the paper uses approaches like SVM and KNN (popular at that time) and does not use CNNs.</p>
+  </li>
+  <li>
+    <p>Other observations of the paper are still very relevant, and it is an educational paper. 
For example, since ImageNet classes are based on WordNet, the paper looks at the effect of the semantic relations (tree) of categories on the performance of the trained models.</p>
+  </li>
+  <li>
+    <p><a href="http://openaccess.thecvf.com/content_cvpr_2015/papers/Jain_What_do_15000_2015_CVPR_paper.pdf">Link to the paper</a></p>
+  </li>
+  <li>
+    <p>The paper considers three variants of the ImageNet dataset - ImageNet 10K (10184 classes), ImageNet 7K (7404 classes) and ImageNet 1K (1000 classes).</p>
+  </li>
+  <li>
+    <p>They also consider smaller variants with randomly sampled classes or cases where the examples are sampled from one high-level category like vehicles.</p>
+  </li>
+  <li>
+    <p>SVM and KNN models are used with features like Bag of Words, GIST descriptors, and spatial pyramid of histograms.</p>
+  </li>
+  <li>
+    <p>Observations</p>
+
+    <ul>
+      <li>
+        <p>A model that performs well on the smaller dataset (with fewer classes) may not perform well on the larger dataset (with more classes).</p>
+      </li>
+      <li>
+        <p>There seems to be an approximate correlation between the structure of the semantic hierarchy of the labels (obtained via WordNet) and visual confusion between the categories.</p>
+      </li>
+      <li>
+        <p>For example, consider two high-level concepts - say, artifacts and animals. The model is less likely to confuse classes across the high-level concepts but more likely to confuse classes within the respective concepts.</p>
+      </li>
+      <li>
+        <p>For dense categories (categories where the classes are semantically more closely related to each other), the model tends to make more mistakes (even if the number of classes is fewer).</p>
+      </li>
+      <li>
+        <p>Accounting for the label hierarchy (in the loss function) improves the classification performance.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+
+
+
+
+   mixup - Beyond Empirical Risk Minimization
+
+   2020-02-27T00:00:00-05:00
+   /site/2020/02/27/mixup Beyond Empirical Risk Minimization
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes a simple and dataset-agnostic data augmentation mechanism called <em>mixup</em>.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1710.09412">Link to the paper</a></p>
+  </li>
+  <li>
+    <p>Consider two training examples, $(x_1, y_1)$ and $(x_2, y_2)$, where $x_1$ and $x_2$ are the datapoints and $y_1$ and $y_2$ are the labels.</p>
+  </li>
+  <li>
+    <p>New training examples of the form $(\lambda \times x_1 + (1-\lambda) \times x_2, \lambda \times y_1 + (1-\lambda) \times y_2)$ are constructed by considering the linear interpolation of the datapoints and the labels. Here $\lambda \in [0, 1]$.</p>
+  </li>
+  <li>
+    <p>$\lambda$ is sampled from a Beta distribution $Beta(\alpha, \alpha)$ where $\alpha \in (0, \infty)$.</p>
+  </li>
+  <li>
+    <p>Setting $\lambda$ to 0 or 1 eliminates the effect of <em>mixup</em>.</p>
+  </li>
+  <li>
+    <p>Mixup encourages the neural network to favor linear behavior between the training examples.</p>
+  </li>
+</ul>
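+
+<p>A minimal sketch of the interpolation above, assuming one-hot labels so that labels can be interpolated the same way as inputs:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def mixup(x1, y1, x2, y2, alpha=0.2):
+    lam = np.random.beta(alpha, alpha)   # lambda in [0, 1]
+    x = lam * x1 + (1 - lam) * x2        # interpolate inputs
+    y = lam * y1 + (1 - lam) * y2        # interpolate (one-hot) labels
+    return x, y
+</code></pre></div></div>
+
+<p>Per the Observations section below, the authors apply this between a batch and a shuffled copy of the same batch, so no extra data loading is needed.</p>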
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p><strong>Supervised Learning</strong></p>
+
+    <ul>
+      <li>
+        <p>ImageNet for ResNet-50, ResNet-101 and ResNext-101.</p>
+      </li>
+      <li>
+        <p>CIFAR10/CIFAR100 for PreAct ResNet-18, WideResNet-28-10 and DenseNet.</p>
+      </li>
+      <li>
+        <p>Google command dataset for LeNet and VGG.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>In all these setups, adding <em>mixup</em> improves the performance of the model.</p>
+  </li>
+  <li>
+    <p><em>Mixup</em> makes the model more robust to noisy labels. Moreover, <em>mixup</em> + dropout improves over <em>mixup</em> alone. This hints that <em>mixup</em>’s benefits are complementary to those of dropout.</p>
+  </li>
+  <li>
+    <p><em>Mixup</em> makes the network more robust to adversarial examples in both white-box and black-box settings (ImageNet + Resnet101).</p>
+  </li>
+  <li>
+    <p><em>Mixup</em> also stabilizes the training of GANs by acting as a regularizer for the gradient of the discriminator.</p>
+  </li>
+</ul>
+
+<h2 id="observations">Observations</h2>
+
+<ul>
+  <li>
+    <p>Convex combination of three or more examples (with weights sampled from a Dirichlet distribution) does not provide gains over the case of two examples.</p>
+  </li>
+  <li>
+    <p>In the authors’ implementation, <em>mixup</em> is applied between images of the same batch (after shuffling).</p>
+  </li>
+  <li>
+    <p>Interpolating only between inputs, with the same labels, did not lead to the same kind of gains as <em>mixup</em>.</p>
+  </li>
+</ul>
+
+
+
+
+
+   ELECTRA - Pre-training Text Encoders as Discriminators Rather Than Generators
+
+   2020-02-20T00:00:00-05:00
+   /site/2020/02/20/ELECTRA - Pre-training Text Encoders as Discriminators Rather Than Generators
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>Masked Language Modeling (MLM) is a common technique for pre-training language-based models. The idea is to “corrupt” some tokens in the input text (around 15%) by replacing them with the [MASK] token and then training the network to reconstruct (or predict) the corrupted tokens.</p>
+  </li>
+  <li>
+    <p>Since the network learns from only about 15% of the tokens, the computational cost of training using MLM can be quite high.</p>
+  </li>
+  <li>
+    <p>The paper proposes to use a “replaced token detection” task where some tokens in the input text are replaced by other plausible tokens.</p>
+  </li>
+  <li>
+    <p>For each token in the modified text, the network has to predict if the token has been replaced or not.</p>
+  </li>
+  <li>
+    <p>The alternative token is generated using a small generator network.</p>
+  </li>
+  <li>
+    <p>Unlike the previous MLM setup, the proposed task is defined for all the input tokens, thus utilizing the training data more efficiently.</p>
+  </li>
+  <li>
+    <p><a href="https://openreview.net/forum?id=r1xMH1BtvB">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+  <li>
+    <p>The proposed approach is called ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately)</p>
+  </li>
+  <li>
+    <p>Two neural networks - Generator (G) and Discriminator (D) - are trained.</p>
+  </li>
+  <li>
+    <p>Each network has a Transformer-based text encoder that maps a sequence of words into a sequence of vectors.</p>
+  </li>
+  <li>
+    <p>Given an input sequence x (of length N), k indices are chosen for replacing the tokens.</p>
+  </li>
+  <li>
+    <p>For each index, the generator produces a distribution over tokens. A token sampled from this distribution replaces the token in the original sequence. 
The resulting sequence is referred to as the corrupted sequence.</p>
+  </li>
+  <li>
+    <p>Given the corrupted sequence, the Discriminator predicts which token comes from the data distribution and which comes from the generator.</p>
+  </li>
+  <li>
+    <p>The generator is trained using the MLM setup, and the Discriminator is trained using the discriminative loss.</p>
+  </li>
+  <li>
+    <p>After pre-training, only the Discriminator is finetuned on the downstream tasks.</p>
+  </li>
+</ul>
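+
+<p>A hedged sketch of how the corrupted sequence and the discriminator targets can be built, assuming token ids and generator logits as tensors; the function name and shapes are my own:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def corrupt(tokens, generator_logits, mask_positions):
+    # tokens: (N,) original token ids; generator_logits: (N, vocab_size);
+    # mask_positions: (k,) indices chosen for replacement
+    corrupted = tokens.clone()
+    probs = generator_logits[mask_positions].softmax(dim=-1)
+    samples = torch.multinomial(probs, num_samples=1).squeeze(-1)
+    corrupted[mask_positions] = samples
+    # discriminator target: 1 where the token was replaced, 0 elsewhere;
+    # a sampled token equal to the original counts as "not replaced"
+    is_replaced = (corrupted != tokens).long()
+    return corrupted, is_replaced
+</code></pre></div></div>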
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p>Datasets</p>
+
+    <ul>
+      <li>
+        <p>GLUE Benchmark</p>
+      </li>
+      <li>
+        <p>Stanford QA dataset</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Architecture Choices</p>
+
+    <ul>
+      <li>
+        <p>Sharing word embeddings between the generator and the Discriminator helps.</p>
+      </li>
+      <li>
+        <p>Tying all the encoder weights leads to marginal improvement but forces the generator and the Discriminator to be of the same size. Hence only embeddings are shared.</p>
+      </li>
+      <li>
+        <p>The generator model is kept smaller than the discriminator model, as a strong generator can make the training difficult for the Discriminator.</p>
+      </li>
+      <li>
+        <p>A two-stage training procedure was explored where only the generator is trained for n steps. Then the weights of the generator are used to initialize the Discriminator. The Discriminator is then trained for n steps while keeping the generator fixed.</p>
+      </li>
+      <li>
+        <p>This two-stage setup provides a nice curriculum for the Discriminator but does not outperform the joint training based setup.</p>
+      </li>
+      <li>
+        <p>An adversarial-loss-based setup is also explored, but it does not work well, probably because of the following reasons:</p>
+
+        <ul>
+          <li>
+            <p>The adversarially trained generator is not as good as the MLM generator.</p>
+          </li>
+          <li>
+            <p>The adversarially trained generator produces a low entropy output distribution.</p>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Results</p>
+
+    <ul>
+      <li>Both small and large ELECTRA models outperform baseline models like <a href="https://arxiv.org/abs/1810.04805">BERT</a>, <a href="https://arxiv.org/abs/1907.11692">RoBERTa</a>, <a href="https://arxiv.org/abs/1802.05365">ELMo</a> and <a href="https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf">GPT</a>.</li>
+    </ul>
+  </li>
+  <li>
+    <p>Ablations</p>
+
+    <ul>
+      <li>
+        <p>ELECTRA-15 is a variant of ELECTRA where the Discriminator is trained on only 15% of the tokens (similar to the MLM setup). This reduces performance significantly.</p>
+      </li>
+      <li>
+        <p>Replace MLM setup</p>
+
+        <ul>
+          <li>
+            <p>Perform MLM training, but instead of using [MASK], use a token sampled from the generator.</p>
+          </li>
+          <li>
+            <p>This improves the performance marginally.</p>
+          </li>
+        </ul>
+      </li>
+      <li>
+        <p>All-token MLM</p>
+
+        <ul>
+          <li>
+            <p>In the MLM setup, replace the [MASK] token by the sampled tokens and train the MLM model to generate all the words.</p>
+          </li>
+          <li>
+            <p>In practice, the MLM model can either generate a word or copy the existing word.</p>
+          </li>
+          <li>
+            <p>This approach closes much of the gap between BERT and ELECTRA.</p>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Interestingly, ELECTRA outperforms All-token MLM BERT, suggesting that ELECTRA may be benefiting from parameter efficiency, since it does not have to learn a distribution over all the words.</p>
+  </li>
+</ul>
+
+
+
+
+
+   Gradient based sample selection for online continual learning
+
+   2020-02-13T00:00:00-05:00
+   /site/2020/02/13/Gradient based sample selection for online continual learning
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>Use of a replay buffer (and rehearsal) is a common technique for mitigating catastrophic forgetting.</p>
+  </li>
+  <li>
+    <p>The paper builds on this idea but focuses on the sample selection aspect, i.e., which data points to store in the replay buffer.</p>
+  </li>
+  <li>
+    <p>It formulates sample selection as a constraint minimization problem and shows that the proposed formulation is equivalent to maximizing the diversity of the samples with respect to the parameter gradient.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1903.08671">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>Supervised learning tasks</p>
+  </li>
+  <li>
+    <p>Online stream of data (i.e., one or few datapoints accessed at a time).</p>
+  </li>
+  <li>
+    <p>When considering the $t^{th}$ task, the objective is: minimize the loss on the current task without increasing the loss on any of the previous tasks.</p>
+  </li>
+  <li>
+    <p>The above constraint can be rephrased as $dot(g_t, g_i) \gt 0, \forall i \in [0, t-1]$ where $g_t$ is the gradient for the $t^{th}$ task.</p>
+  </li>
+  <li>
+    <p>This is equivalent to saying that the current task gradient should not interfere negatively with the previous task gradients.</p>
+  </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+  <li>
+    <p>In practice, the gradient constraint is enforced only over the examples in the minibatch (and not the full dataset).</p>
+  </li>
+  <li>
+    <p>The paper interprets the constraint satisfaction problem as approximating an optimal feasible region (in the gradient space) where current task performance can be improved without hurting the performance on the previous tasks.</p>
+  </li>
+  <li>
+    <p>The approximate region (of the shape of a polyhedral convex cone) is determined using only the examples from the replay buffer. 
Hence, the optimal region (defined for the entire dataset) would be contained within the approximate region.</p>
+  </li>
+  <li>
+    <p>The size of the approximate region can be measured in terms of the solid angle defined by the intersection between the approximate region and a unit sphere.</p>
+  </li>
+  <li>
+    <p>The paper argues that the approximate region can be made smaller by reducing the angle between each pair of gradients.</p>
+  </li>
+  <li>
+    <p>The set of points satisfying the constraint can be computed using Integer Quadratic Programming (IQP).</p>
+  </li>
+  <li>
+    <p>Given that the problem setup is online learning, using IQP for every new data point is not feasible.</p>
+  </li>
+  <li>
+    <p>An inexact, greedy alternative is suggested, where a score is maintained for each example in the buffer.</p>
+  </li>
+  <li>
+    <p>When a new datapoint comes in, its score is computed and used to decide whether an existing datapoint in the buffer should be replaced.</p>
+  </li>
+  <li>
+    <p>The score is the maximal cosine similarity of the current example with a random sample in the buffer.</p>
+  </li>
+</ul>
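+
+<p>A minimal sketch of this greedy score, assuming per-example gradients are flattened into vectors; the similarity is computed in gradient space, and the number of reference samples is my own knob (lower scores indicate more diverse, and hence more valuable, candidates):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def score(g_new, buffer_grads, n_ref=10):
+    # g_new: (d,) flattened gradient of the incoming example
+    # buffer_grads: (B, d) gradients of the examples currently in the buffer
+    idx = np.random.choice(len(buffer_grads),
+                           size=min(n_ref, len(buffer_grads)), replace=False)
+    ref = buffer_grads[idx]
+    cos = ref @ g_new / (np.linalg.norm(ref, axis=1)
+                         * np.linalg.norm(g_new) + 1e-12)
+    return cos.max()   # maximal cosine similarity to a random buffer sample
+</code></pre></div></div>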
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p>Benchmarks</p>
+
+    <ul>
+      <li>
+        <p>Disjoint MNIST</p>
+      </li>
+      <li>
+        <p>Permuted MNIST</p>
+      </li>
+      <li>
+        <p>Disjoint CIFAR10</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Shared head setup</p>
+  </li>
+  <li>
+    <p>Baselines for sample selection</p>
+
+    <ul>
+      <li>
+        <p>Randomly select examples to keep in the buffer.</p>
+      </li>
+      <li>
+        <p>Perform clustering - either in the feature space or in the gradient space.</p>
+      </li>
+      <li>
+        <p>Use IQP to select the examples. This approach is not used for CIFAR10, as it is computationally costly.</p>
+      </li>
+      <li>
+        <p>It would be interesting if the paper had considered baselines like selecting the samples with the largest loss.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The proposed greedy approach outperforms the other methods.</p>
+  </li>
+  <li>
+    <p>In an ablation experiment, the paper shows that the proposed approach works better than reservoir sampling (when the underlying data distribution is imbalanced).</p>
+  </li>
+  <li>
+    <p>Another experiment compares the proposed approach with <a href="https://papers.nips.cc/paper/7225-gradient-episodic-memory-for-continual-learning.pdf">Gradient Episodic Memory</a> and <a href="https://arxiv.org/abs/1611.07725">iCaRL</a>. For Permuted and Disjoint MNIST, the different methods perform quite similarly, though the proposed approach performs better on Disjoint CIFAR10.</p>
+  </li>
+</ul>
+
+
+
+
+
+   Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
+
+   2020-02-06T00:00:00-05:00
+   /site/2020/02/06/Your Classifier is Secretly an Energy-Based Model, and You Should Treat it Like One
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes a framework for joint modeling of labels and data by interpreting a discriminative classifier <em>p(y|x)</em> as an energy-based model <em>p(x, y)</em>.</p>
+  </li>
+  <li>
+    <p>Joint modeling provides benefits like improved calibration (i.e., the predictive confidence should align with the misclassification rate), robustness, and out-of-distribution detection.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1912.03263">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="motivation">Motivation</h2>
+
+<ul>
+  <li>
+    <p>Consider a standard classifier $f_{\theta}(x)$, which produces a k-dimensional vector of logits.</p>
+  </li>
+  <li>
+    <p>$p_{\theta}(y | x) = softmax(f_{\theta}(x)[y])$</p>
+  </li>
+  <li>
+    <p>Using concepts from energy-based models, we can write $p_{\theta}(x, y) = \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}$ where $E_{\theta}(x, y) = -f_{\theta}(x)[y]$</p>
+  </li>
+  <li>
+    <p>$p_{\theta}(x) = \sum_{y}{ \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}}$</p>
+  </li>
+  <li>
+    <p>$E_{\theta}(x) = -LogSumExp_y(f_{\theta}(x)[y])$</p>
+  </li>
+  <li>
+    <p>Note that in the standard discriminative setup, shifting the logits $f_{\theta}(x)$ does not affect the model, but it affects $p_{\theta}(x)$.</p>
+  </li>
+  <li>
+    <p>Computing $p_{\theta}(y | x)$ using $p_{\theta}(x, y)$ and $p_{\theta}(x)$ gives back the same softmax parameterization as before.</p>
+  </li>
+  <li>
+    <p>This reinterpreted classifier is referred to as a Joint Energy-based Model (JEM).</p>
+  </li>
+</ul>
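+
+<p>A minimal sketch of the reinterpretation above: the same logits give the classifier $p(y|x)$, the joint energy $E(x, y)$, and the marginal energy $E(x)$ via LogSumExp (the helper name is my own):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def energies(logits, y=None):
+    # logits: f_theta(x), shape (N, K)
+    e_x = -torch.logsumexp(logits, dim=1)      # E(x)   = -LogSumExp_y f(x)[y]
+    p_y_given_x = logits.softmax(dim=1)        # the unchanged softmax classifier
+    e_xy = None
+    if y is not None:                          # E(x,y) = -f(x)[y]
+        e_xy = -logits.gather(1, y.view(-1, 1)).squeeze(1)
+    return e_x, e_xy, p_y_given_x
+</code></pre></div></div>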
+
+<h2 id="optimization">Optimization</h2>
+
+<ul>
+  <li>
+    <p>The log-likelihood of the data can be factorized as $log p_{\theta}(x, y) = log p_{\theta}(x) + log p_{\theta}(y | x)$.</p>
+  </li>
+  <li>
+    <p>The second factor can be trained using the standard CE loss. In contrast, the first factor can be trained using a sampler based on Stochastic Gradient Langevin Dynamics.</p>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<h3 id="hybrid-modelling">Hybrid Modelling</h3>
+
+<ul>
+  <li>
+    <p>Datasets: CIFAR10, CIFAR100, SVHN.</p>
+  </li>
+  <li>
+    <p>Metrics: Inception Score, Frechet Inception Distance</p>
+  </li>
+  <li>
+    <p>JEM outperforms generative, discriminative, and hybrid models on both generative and discriminative tasks.</p>
+  </li>
+</ul>
+
+<h3 id="calibration">Calibration</h3>
+
+<ul>
+  <li>
+    <p>A calibrated classifier is one where the predictive confidence aligns with the misclassification rate.</p>
+  </li>
+  <li>
+    <p>Dataset: CIFAR100</p>
+  </li>
+  <li>
+    <p>JEM improves calibration while retaining high accuracy.</p>
+  </li>
+</ul>
+
+<h3 id="out-of-distribution-ood-detection">Out of Distribution (OOD) Detection</h3>
+
+<ul>
+  <li>
+    <p>One way to detect OOD samples is to learn a density model that assigns a higher likelihood to in-distribution examples and a lower likelihood to out-of-distribution examples.</p>
+  </li>
+  <li>
+    <p>JEM consistently assigns a higher likelihood to in-distribution examples.</p>
+  </li>
+  <li>
+    <p>The paper also proposes an alternate metric called <em>approximate mass</em> to detect OOD examples.</p>
+  </li>
+  <li>
+    <p>The intuition is that a point could have a high likelihood but be impossible to sample because its surroundings have a very low density.</p>
+  </li>
+  <li>
+    <p>On the other hand, the in-distribution data points would lie in a region of high probability mass.</p>
+  </li>
+  <li>
+    <p>Hence the norm of the gradient of the log density could provide a useful signal to detect OOD examples.</p>
+  </li>
+</ul>
+
+<h3 id="robustness">Robustness</h3>
+
+<ul>
+  <li>JEM is more robust to adversarial attacks as compared to discriminative classifiers.</li>
+</ul>
+
+
+
+
+
+   Massively Multilingual Neural Machine Translation in the Wild - Findings and Challenges
+
+   2020-01-30T00:00:00-05:00
+   /site/2020/01/30/Massively Multilingual Neural Machine Translation in the Wild-Findings and Challenges
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes to build a universal neural machine translation system that can translate between any pair of languages.</p>
+  </li>
+  <li>
+    <p>As a concrete instance, the paper prototypes a system that handles 103 languages (25 billion translation pairs).</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1907.05019">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="why-universal-machine-translation">Why universal Machine Translation</h2>
+
+<ul>
+  <li>
+    <p>Hypothesis: <em>The learning signal from one language should benefit the quality of other languages</em><a href="https://link.springer.com/article/10.1023/A:1007379606734">1</a></p>
+  </li>
+  <li>
+    <p>This positive transfer is evident for low resource languages but tends to hurt the performance for high resource languages.</p>
+  </li>
+  <li>
+    <p>In practice, adding new languages reduces the effective per-task capacity of the model.</p>
+  </li>
+</ul>
+
+<h2 id="desiderata-for-multilingual-translation-model">Desiderata for Multilingual Translation Model</h2>
+
+<ul>
+  <li>
+    <p>Maximize the number of languages within one model.</p>
+  </li>
+  <li>
+    <p>Maximize the positive transfer to low resource languages.</p>
+  </li>
+  <li>
+    <p>Minimize the negative interference to high resource languages.</p>
+  </li>
+  <li>
+    <p>Perform well in realistic, multi-domain settings.</p>
+  </li>
+</ul>
+
+<h2 id="datasets">Datasets</h2> + +<ul> + <li> + <p>In-house corpus generated by crawling and extracting parallel sentences from the web.</p> + </li> + <li> + <p>102 languages, with 25 billion sentence pairs.</p> + </li> + <li> + <p>Compared with the existing datasets, this dataset is much larger, spans more domains, has a good variation in the amount of data available for different language pairs, and is noisier. These factors bring additional challenges to the universal NMT setup.</p> + </li> +</ul> + +<h2 id="baselines">Baselines</h2> + +<ul> + <li> + <p>Dedicated Bilingual models (variants of Transformers).</p> + </li> + <li> + <p>Most bilingual experiments used Transformer big and a shared source-target sentence-piece model (SPE).</p> + </li> + <li> + <p>For medium and low resource languages, the Transformer Base was also considered.</p> + </li> + <li> + <p>Batch size of 1 M tokes per-batch. Increasing the batch size improves model quality and speeds up convergence.</p> + </li> +</ul> + +<h2 id="effect-of-transfer-and-interference">Effect of Transfer and Interference</h2> + +<ul> + <li> + <p>The paper compares the following two setups with the baseline:</p> + + <ul> + <li> + <p>Combine all the datasets and train over them as if it is a single dataset.</p> + </li> + <li> + <p>Combine all the datasets but upsample low resource languages so all that all the languages are equally likely to appear in the combined dataset.</p> + </li> + </ul> + </li> + <li> + <p>A target “index” is prepended with every input sentence to indicate which language it should be translated into.</p> + </li> + <li> + <p>Shared encoder and decoder are used across all the language pairs.</p> + </li> + <li> + <p>The two setups use a batch size of 4M tokens.</p> + </li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li> + <p>When all the languages are equally sampled, the performance on the low resource languages increases, at the cost of performance on high resource languages.</p> + </li> + <li> + <p>Training over all the data at once reverse this trend.</p> + </li> +</ul> + +<h3 id="countering-interference">Countering Interference</h3> + +<ul> + <li> + <p>Temperature based sampling strategy is used to control the ratio of samples from different language pairs.</p> + </li> + <li> + <p>A balanced sampling strategy improves the performance for the high resource languages (though not as good as the multilingual baselines) while retaining the high transfer performance on the low resource languages.</p> + </li> + <li> + <p>Another reason behind the lagging performance (as compared to bilingual baselines) is the capacity of the multilingual models.</p> + </li> + <li> + <p>Some open problems to consider:</p> + + <ul> + <li> + <p>Task Scheduling - How to decide the order in which different language pairs should be trained.</p> + </li> + <li> + <p>Optimization for multitask learning - How to design optimizer, loss functions, etc. that can exploit task similarity.</p> + </li> + <li> + <p>Understanding Transfer:</p> + + <ul> + <li> + <p>For the low resource languages, translating multiple languages to English leads to improved performance than translating English to multiple languages.</p> + </li> + <li> + <p>This can be explained as follows: In the first case (many-to-one), the setup is that of a multi-domain model (each source language is a domain). 
+
+<h2 id="effect-of-preprocessing-and-vocabulary">Effect of preprocessing and vocabulary</h2>
+
+<ul>
+  <li>
+    <p>A Sentence Piece Model (SPM) is used.</p>
+  </li>
+  <li>
+    <p>Temperature sampling is used to sample vocabulary from different languages.</p>
+  </li>
+  <li>
+    <p>Using a smaller vocabulary (and hence smaller sub-word tokens) performs better for low resource languages, probably due to improved generalization.</p>
+  </li>
+  <li>
+    <p>Low and medium resource languages tend to perform better with higher temperatures.</p>
+  </li>
+</ul>
+
+<h2 id="effect-of-capacity">Effect of Capacity</h2>
+
+<ul>
+  <li>Using deeper models improves performance (as compared to wider models with the same number of parameters) on most language pairs.</li>
+</ul>
+
+
+
+
+
+   Observational Overfitting in Reinforcement Learning
+
+   2020-01-23T00:00:00-05:00
+   /site/2020/01/23/Observational Overfitting in Reinforcement Learning
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper studies <em>observational overfitting</em>: the phenomenon where an agent overfits to different observation spaces even though the underlying MDP remains fixed.</p>
+  </li>
+  <li>
+    <p>Unlike other works, the “background information” (in the pixel space) is correlated with the progress of the agent (and is not just noise).</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1912.02975">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>Base MDP $M = (S, A, R, T)$ where $S$ is the state space, $A$ is the action space, $R$ is the reward function, and $T$ is the transition dynamics.</p>
+  </li>
+  <li>
+    <p>$M$ is parameterized using $\theta$. In practice, it means introducing an observation function $\phi_{\theta}$, i.e., $M_{\theta} = (M, \phi_{\theta})$.</p>
+  </li>
+  <li>
+    <p>A distribution over $\theta$ defines a distribution over the MDPs.</p>
+  </li>
+  <li>
+    <p>The learning agent has access to the pixel space observations and not the state space observations.</p>
+  </li>
+  <li>
+    <p>The generalization gap is defined as $J_{\theta}(\pi) - J_{\theta^{train}}(\pi)$ where $\pi$ is the learning agent, $\theta$ is the distribution over all the observation functions, $\theta^{train}$ is the distribution over the observation functions corresponding to the training environments. $J_{\theta}(\pi)$ is the average reward that the agent obtains over environments sampled from $M_{\theta}$.</p>
+  </li>
+  <li>
+    <p>$\phi_{\theta}$ combines two features - a generalizable one (invariant across $\theta$) and a non-generalizable one (depends on $\theta$), i.e., $\phi_{\theta}(s) = concat(f(s), g_{\theta}(s))$ where $f$ is the invariant function and $g$ is the non-generalizable function.</p>
+  </li>
+  <li>
+    <p>The problem is set up such that “explicit regularization” can easily solve it. The focus is on understanding the effect of “implicit regularization”.</p>
+  </li>
+</ul>
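+
+<p>A minimal sketch of the observation function above, with both parts taken to be linear maps (as in the LQR experiments below); the shared matrix $W_c$ plays the role of $f$ and the per-environment matrix $W_{\theta}$ plays the role of $g_{\theta}$ (the helper name is my own):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def make_observation_fn(d_state, d_noise, seed):
+    rng = np.random.RandomState(seed)
+    W_c = rng.randn(d_state, d_state)        # invariant part, shared across envs
+    W_theta = rng.randn(d_noise, d_state)    # resampled for every environment
+    def phi(s):
+        # phi_theta(s) = concat(f(s), g_theta(s))
+        return np.concatenate([W_c @ s, W_theta @ s])
+    return phi
+</code></pre></div></div>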
+
+<h2 id="experiments">Experiments</h2>
+
+<h3 id="overparameterized-lqr">Overparameterized LQR</h3>
+
+<ul>
+  <li>
+    <p>LQR is used as a proxy for deep RL architectures, given its advantages like enabling exact gradient descent.</p>
+  </li>
+  <li>
+    <p>The functions are parameterized as follows:</p>
+
+    <ul>
+      <li>
+        <p>$f(s) = W_c s$</p>
+      </li>
+      <li>
+        <p>$g_{\theta}(s) = W_{\theta} s$</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The observation at time $t$, $o_t$, is given as $[W_c W_{\theta}]^{-1} s_t$.</p>
+  </li>
+  <li>
+    <p>The action at time $t$ is given as $a_t = K o_{t}$ where $K$ is the policy matrix.</p>
+  </li>
+  <li>
+    <p>Dimensionality:</p>
+
+    <ul>
+      <li>state $s$: $d_{state} = 100$</li>
+      <li>$f(s)$: $d_{state} = 100$</li>
+      <li>$g_{\theta}(s)$: $d_{noise} = 1000$</li>
+      <li>observation $o$: $d_{state} + d_{noise} = 1100$</li>
+    </ul>
+  </li>
+  <li>
+    <p>When training on just one environment, multiple solutions exist, and overfitting happens.</p>
+  </li>
+  <li>
+    <p>Increasing $d_{noise}$ increases the generalization gap.</p>
+  </li>
+  <li>
+    <p>Overparameterizing the network decreases the generalization gap and also reduces the norm of the policy.</p>
+  </li>
+</ul>
+
+<h3 id="projected-gym-environments">Projected Gym Environments</h3>
+
+<ul>
+  <li>
+    <p>The base MDP is the Gym Environment.</p>
+  </li>
+  <li>
+    <p>$M_{\theta}$ is generated as before.</p>
+  </li>
+  <li>
+    <p>Increasing both width and depth for basic MLPs improves generalization.</p>
+  </li>
+  <li>
+    <p>Generalization also depends on the choice of activation function, residual layers, etc.</p>
+  </li>
+</ul>
+
+<h3 id="deconvolutional-projections">Deconvolutional Projections</h3>
+
+<ul>
+  <li>
+    <p>In the Gym environment, the actual state is projected to a larger vector and reshaped into an 84x84 tensor (image).</p>
+  </li>
+  <li>
+    <p>The image from $f$ is concatenated with the image from $g$. This setup is referred to as Gym-Deconv.</p>
+  </li>
+  <li>
+    <p>The relative order of performance between NatureCNN, IMPALA, and IMPALA-Large (on both CoinRun and Gym-Deconv) is the same as the order of the number of parameters they contain.</p>
+  </li>
+  <li>
+    <p>In an ablation, the policy is given access to only $g_{\theta}(s)$, which makes it impossible for the model to generalize. In this test of memorization capacity, implicit regularization seems to reduce the memorization effect.</p>
+  </li>
+</ul>
+
+<h3 id="overparameterization-in-coinrun">Overparameterization in CoinRun</h3>
+
+<ul>
+  <li>
+    <p>The pixel space observation in CoinRun is downsized from 64x64 to 32x32 and flattened into a vector.</p>
+  </li>
+  <li>
+    <p>In CoinRun, the dynamics change per level, and the noisy “irrelevant” features change location across the 1D input, making this setup more challenging than the previous ones.</p>
+  </li>
+  <li>
+    <p>Overparameterization improves generalization in this scenario as well.</p>
+  </li>
+</ul>
+
+
+
+
+
+   Rapid Learning or Feature Reuse? 
Towards Understanding the Effectiveness of MAML
+
+   2020-01-16T00:00:00-05:00
+   /site/2020/01/16/Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper investigates two possible reasons behind the usefulness of the MAML algorithm:</p>
+
+    <ul>
+      <li>
+        <p><strong>Rapid Learning</strong> - Does MAML learn features that are amenable to rapid learning?</p>
+      </li>
+      <li>
+        <p><strong>Feature Reuse</strong> - Does the MAML initialization provide high-quality features that are useful for unseen tasks?</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>This leads to a follow-up question: how much task-specific inner loop adaptation is needed.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1909.09157">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+  <li>
+    <p>In a standard few-shot learning setup, the different datasets have different classes. Hence, the top-most layer (or head) of the learning model should be different for different tasks.</p>
+  </li>
+  <li>
+    <p>The subsequent discussion only applies to the body of the network (i.e., the network minus the head).</p>
+  </li>
+  <li>
+    <p><strong>Freezing Layer Representations</strong></p>
+
+    <ul>
+      <li>
+        <p>In this setup, a subset (or all) of the parameters are frozen (after MAML training) and are not adapted at test time.</p>
+      </li>
+      <li>
+        <p>Even when the entire network is frozen, the performance drops only marginally.</p>
+      </li>
+      <li>
+        <p>This indicates that the representation learned by the meta-initialization is good enough to be useful on the test tasks (without requiring any adaptation step).</p>
+      </li>
+      <li>
+        <p>Note that the head of the network is still adapted during testing.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p><strong>Representational Similarity</strong></p>
+
+    <ul>
+      <li>
+        <p>In this setup, the paper compares the latent representations (learned by the network) before and after the inner loop update, for a fully trained model.</p>
+      </li>
+      <li>
+        <p>Canonical Correlation Analysis (CCA) and Central Kernel Alignment (CKA) metrics are used to measure the similarity between the representations.</p>
+      </li>
+      <li>
+        <p>The main finding is that the representations in the body of the network are very similar before and after the inner loop updates, while the representations in the head of the network are very different.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The above two observations indicate that feature reuse is the primary driving factor for the success of MAML.</p>
+  </li>
+  <li>
+    <p><strong>When does feature reuse happen</strong></p>
+
+    <ul>
+      <li>
+        <p>The paper considers the model at different stages of training and compares the similarity in the representations (before and after the inner loop update).</p>
+      </li>
+      <li>
+        <p>Even early in training, the CCA similarity between the representations (before and after the inner loop update) is quite high. Similarly, freezing the layers (for the test time update), early in training, does not degrade the test time performance much. 
This hints that feature reuse happens early in the learning process.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+<h2 id="the-anil-almost-no-inner-loop-algorithm">The ANIL (Almost No Inner Loop) Algorithm</h2>
+
+<ul>
+  <li>
+    <p>The empirical evidence suggests that the success of MAML lies in feature reuse.</p>
+  </li>
+  <li>
+    <p>The authors build on this observation and propose a simplification of the MAML algorithm: ANIL, or the Almost No Inner Loop algorithm.</p>
+  </li>
+  <li>
+    <p>In this algorithm, the inner loop updates are applied only to the head of the network.</p>
+  </li>
+  <li>
+    <p>Despite being much more straightforward, the performance of ANIL is close to the performance of MAML for both few-shot image classification and RL tasks.</p>
+  </li>
+  <li>
+    <p>Removing most of the inner loop parameters speeds up the computation by a factor of 1.7 (during training) and 4.1 (during inference).</p>
+  </li>
+</ul>
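+
+<p>A minimal sketch of ANIL’s inner loop at adaptation time, assuming a <code class="language-plaintext highlighter-rouge">body</code> and <code class="language-plaintext highlighter-rouge">head</code> module; only the head’s parameters are adapted on the support set (step count and learning rate are my own placeholders):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def anil_inner_loop(body, head, x_support, y_support, steps=5, lr=0.01):
+    with torch.no_grad():
+        features = body(x_support)       # the body is reused as-is
+    for _ in range(steps):
+        loss = F.cross_entropy(head(features), y_support)
+        grads = torch.autograd.grad(loss, list(head.parameters()))
+        with torch.no_grad():
+            for p, g in zip(head.parameters(), grads):
+                p -= lr * g              # adapt only the head
+    return head
+</code></pre></div></div>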
+
+<h2 id="removing-the-inner-loop-update">Removing the Inner Loop Update</h2>
+
+<ul>
+  <li>
+    <p>Given that it is possible to remove most of the parameters from the inner loop update (without affecting the performance), the next step is to check if the inner loop update can be removed entirely.</p>
+  </li>
+  <li>
+    <p>This leads to the NIL (No Inner Loop) algorithm, which does not involve any inner loop adaptation steps.</p>
+  </li>
+</ul>
+
+<h3 id="algorithm">Algorithm</h3>
+
+<ul>
+  <li>
+    <p>A few-shot learning model is trained - either with MAML or ANIL.</p>
+  </li>
+  <li>
+    <p>During testing, the head is removed.</p>
+  </li>
+  <li>
+    <p>For each task, the K training examples are fed to the body to obtain class representations.</p>
+  </li>
+  <li>
+    <p>For a given test data point, the representation of the data point is compared with the different class representations to obtain the target class.</p>
+  </li>
+  <li>
+    <p>The NIL algorithm performs similarly to the MAML and ANIL algorithms for the few-shot image classification task.</p>
+  </li>
+  <li>
+    <p>Note that it is still important to use MAML/ANIL during training, even though the learned head is not used during evaluation.</p>
+  </li>
+</ul>
+
+<h2 id="conclusion">Conclusion</h2>
+
+<ul>
+  <li>The paper discusses the different classes of meta-learning approaches. It concludes with the observation that feature reuse (and not rapid adaptation) seems to be the common mode of operation for both optimization-based meta-learning (e.g., MAML) and model-based meta-learning.</li>
+</ul>
+
+
+
+
+
+   Accurate, Large Minibatch SGD - Training ImageNet in 1 Hour
+
+   2020-01-09T00:00:00-05:00
+   /site/2020/01/09/Accurate Large Minibatch SGD - Training ImageNet in 1 Hour
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>Training models with large minibatches (using distributed synchronous SGD) can lead to optimization issues.</p>
+  </li>
+  <li>
+    <p>The paper presents techniques for training models with a large batch size while matching the accuracy of small minibatch setups.</p>
+  </li>
+  <li>
+    <p>The paper focuses on the ImageNet dataset, but many of the proposed ideas are applicable broadly.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1706.02677">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="linear-scaling-rule">Linear Scaling Rule</h2>
+
+<ul>
+  <li>
+    <p>When the minibatch size increases by a factor of <em>k</em>, the learning rate should also be increased by a factor of <em>k</em> (while keeping all other hyperparameters like weight decay fixed).</p>
+  </li>
+  <li>
+    <p>Note that this is an empirical rule and is not expected to hold under all conditions.</p>
+  </li>
+  <li>
+    <p>One such condition is when the model is changing rapidly during the first few epochs. In this case, a warmup phase is introduced to stabilize the model.</p>
+  </li>
+  <li>
+    <p>The paper verifies that the scaling rule is applicable to batch sizes as large as 8K.</p>
+  </li>
+</ul>
+
+<h2 id="warmup">Warmup</h2>
+
+<ul>
+  <li>The learning rate should be gradually ramped up from a small value to a large value to allow convergence.</li>
+</ul>
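+
+<p>A minimal sketch of the linear scaling rule combined with gradual warmup: ramp linearly from the base (small-batch) rate to <em>k</em> times that rate over the first few epochs, then hand off to the usual decay schedule (the default values here are illustrative, not the paper’s):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def learning_rate(epoch, base_lr=0.1, k=8, warmup_epochs=5):
+    target = base_lr * k                  # linear scaling rule: lr grows with k
+    if epoch &lt; warmup_epochs:             # gradual warmup phase
+        return base_lr + (target - base_lr) * epoch / warmup_epochs
+    return target                         # afterwards, apply the usual decay schedule
+</code></pre></div></div>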
+
+<h2 id="batch-normalization">Batch Normalization</h2>
+
+<ul>
+  <li>
+    <p>Batch normalization uses batch statistics to normalize the data. Hence, the loss corresponding to each data point (in the batch) is not independent. Thus, changing the batch size could change the underlying function being optimized.</p>
+  </li>
+  <li>
+    <p>In the distributed SGD setup, the per-GPU (or per-worker) batch size should be kept constant, and each worker should compute the batch norm statistics over only its own samples.</p>
+  </li>
+</ul>
+
+<h2 id="pitfalls-when-using-distributed-sgd">Pitfalls when using distributed SGD</h2>
+
+<ul>
+  <li>
+    <p>When using weight decay, scaling the cross-entropy loss is not the same as scaling the learning rate.</p>
+  </li>
+  <li>
+    <p>When using momentum, changing the learning rate could require “momentum correction.”</p>
+  </li>
+  <li>
+    <p>Ensure that the per-worker loss is normalized by the size of the total minibatch and not just by the size of the minibatch that each worker sees.</p>
+  </li>
+  <li>
+    <p>For each epoch, use a single random shuffling of the training data (before dividing it between the workers).</p>
+  </li>
+</ul>
+
+<h2 id="communication">Communication</h2>
+
+<ul>
+  <li>
+    <p>The paper describes various techniques to speed up the training pipeline by reducing the communication overhead between nodes. (Each node can have one or more GPUs.)</p>
+  </li>
+  <li>
+    <p>First, a node sums the gradients from all the GPUs it has.</p>
+  </li>
+  <li>
+    <p>The gradients are shared and summed across all the nodes.</p>
+  </li>
+  <li>
+    <p>Each node broadcasts the resulting gradient to all the GPUs it has.</p>
+  </li>
+  <li>
+    <p>Gradient aggregation is performed in parallel with the backpropagation operator. While aggregating the gradient for one layer, the system starts computing the gradient of the next layer.</p>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p>Using these approaches, a Resnet50 model can be trained on the ImageNet dataset in an hour (using 256 workers).</p>
+  </li>
+  <li>
+    <p>When an appropriate warmup strategy is used, the training and the validation curves (for the large batch size setup) match the corresponding curves for the small batch size setup.</p>
+  </li>
+  <li>
+    <p>The best performing warmup strategy is the one where training starts at a learning rate of 0.1 and linearly increases to 3.2 over five epochs.</p>
+  </li>
+  <li>
+    <p>The paper shows that the results are not specific to the Resnet50 model (experiments with the Resnet101 model) or the use case (experiments with object detection and instance segmentation using Mask R-CNN).</p>
+  </li>
+  <li>
+    <p>Along with providing the empirical validation of the proposed ideas, the paper describes all the hyperparameters. It also includes the training and validation curves for the different configurations, which enables others to replicate and build on this work.</p>
+  </li>
+</ul>
+
+
+
+
+
+   Superposition of many models into one
+
+   2020-01-02T00:00:00-05:00
+   /site/2020/01/02/Superposition of many models into one
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes a technique (called Parameter Superposition or PSP) for training and storing multiple models within a single set (or instance) of parameters.</p>
+  </li>
+  <li>
+    <p>The different models exist in “superposition” and can be retrieved dynamically given task-specific context information.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1902.05522">Link to the paper</a>.</p>
+  </li>
+</ul>
+
+<h2 id="parameter-substitution">Parameter Superposition</h2>
+
+<ul>
+  <li>
+    <p>Consider a task with input \(x \in R^N\) and parameters \(W \in R^{M \times N}\), where the output (target or features) is given as \(y=Wx\).</p>
+  </li>
+  <li>
+    <p>Now consider \(K\) such tasks with parameters \(W_1, W_2, \cdots W_K\).</p>
+  </li>
+  <li>
+    <p>If each \(W_k\) requires only a small subspace of \(R^N\), then a linear transformation \(C_k^{-1}\) can be used such that each \(W_kC_k^{-1}\) occupies a mutually orthogonal subspace of \(R^N\).</p>
+  </li>
+  <li>
+    <p>The set of parameters \(W_1, \cdots W_K\) can then be represented by a single \(W \in R^{M \times N}\), obtained by summing the terms \(W_kC_k^{-1}\).</p>
+  </li>
+  <li>
+    <p>The parameters corresponding to the \(k^{th}\) task can be retrieved (with some noise) using the context \(C_k\) as \(\widetilde{W}_k = WC_k\).</p>
+  </li>
+  <li>
+    <p>Even though the retrieval is noisy, the effect of the noise is limited for the context vectors used in the paper.</p>
+  </li>
+  <li>
+    <p>Finally, \(\widetilde{y} = \widetilde{W}_{k}x = (WC_{k})x = W(C_{k}x)\).</p>
+  </li>
+  <li>
+    <p>Instead of learning \(K\) separate models, only \(K\) context vectors (along with one superimposed model) need to be learned.</p>
+  </li>
+  <li>
+    <p>The key assumption is that \(N\) (in \(x \in R^N\)) is large enough that each \(W_k\) requires only a small subspace of \(R^N\).</p>
+  </li>
+  <li>
+    <p>Since images and speech signals tend to occupy a low dimensional manifold, this requirement can be satisfied by over-parameterizing \(x\).</p>
+  </li>
+</ul>
+
+<h2 id="choice-of-context-c">Choice of Context C</h2>
+
+<ul>
+  <li>
+    <p>Rotational Superposition (pspRotation)</p>
+
+    <ul>
+      <li>
+        <p>Sample rotations uniformly from the orthogonal group 
\(O(M)\).</p>
+      </li>
+      <li>
+        <p>The downside is that if \(M \sim N\), it requires storing as many parameters as learning \(K\) individual models (since each \(C_k\) is of size \(M \times M\)).</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Complex Superposition (pspComplex)</p>
+
+    <ul>
+      <li>
+        <p>The design of rotational superposition can be improved by choosing \(C_k\) to be a diagonal matrix, i.e., \(C_k = diag(c_k)\) where \(c_k\) is a vector of size \(M\).</p>
+      </li>
+      <li>
+        <p>Choosing \(c_k\) to be a vector of complex numbers (of the form \(c_{k}^{j} = e^{i\phi_{j}(k)}\) where \(\phi_{j}(k)\), or the phase, is sampled uniformly from \([-\pi, \pi]\)) leads to \(C_k\) being a diagonal orthogonal matrix.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Powers of a single context</p>
+
+    <ul>
+      <li>The memory footprint can be further reduced by choosing the context vectors to be integral powers of the first context vector.</li>
+    </ul>
+  </li>
+  <li>
+    <p>Binary Superposition (pspBinary)</p>
+
+    <ul>
+      <li>This is a special case of complex superposition where the context vectors are binary.</li>
+    </ul>
+  </li>
+</ul>
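+
+<p>A minimal sketch of binary superposition, assuming linear task models: for binary contexts the diagonal matrix \(C_k\) is its own inverse (\(C_k^2 = I\)), so storage and retrieval both reduce to elementwise sign flips (the helper names are my own):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def make_context(n, rng):
+    # binary context vector c_k, equivalent to a diagonal C_k with +/-1 entries
+    return rng.choice([-1.0, 1.0], size=n)
+
+def store(W_list, contexts):
+    # superpose: W = sum_k W_k C_k^{-1}; for binary contexts C_k^{-1} = C_k,
+    # and W_k * c_k multiplies column j of W_k by c_k[j]
+    return sum(W_k * c_k for W_k, c_k in zip(W_list, contexts))
+
+def retrieve_output(W, c_k, x):
+    # W (c_k * x) = (W C_k) x, which equals W_k x plus cross-task noise
+    return W @ (c_k * x)
+</code></pre></div></div>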
+
+<h2 id="neural-network-superposition">Neural Network Superposition</h2>
+
+<ul>
+  <li>
+    <p>The parameter superposition principle can be applied to all the linear layers of a network.</p>
+  </li>
+  <li>
+    <p>For the convolutional layers, it makes more sense to apply superposition to the convolutional kernel and not to the input image (as the dimensionality of the convolutional parameters is smaller than that of the inputs).</p>
+  </li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p>For all the experiments, the baseline is a standard supervised learning setup, unless mentioned otherwise.</p>
+  </li>
+  <li>
+    <p>The metric is the performance on the previous tasks after the model has been trained on the newer tasks.</p>
+  </li>
+  <li>
+    <p>Input Interference</p>
+
+    <ul>
+      <li>
+        <p>The input distribution changes over time.</p>
+      </li>
+      <li>
+        <p>The Permuted MNIST dataset is used, where each permutation of the pixels corresponds to a new task.</p>
+      </li>
+      <li>
+        <p>A new task is sampled every 1000 mini-batches.</p>
+      </li>
+      <li>
+        <p>As the network size increases, Parameter Superposition (psp) outperforms the baseline significantly.</p>
+      </li>
+      <li>
+        <p>pspRotation &gt; pspComplex &gt; pspBinary in terms of both performance and the number of additional parameters required for each new task.</p>
+      </li>
+      <li>
+        <p>Given that pspBinary is the easiest to implement while being comparable to more sophisticated baselines like Elastic Weight Consolidation (EWC) and Synaptic Intelligence, the paper presents most of the results with the pspBinary model.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Continuous Domain Shift</p>
+
+    <ul>
+      <li>
+        <p>Rotating-MNIST and Rotating-FashionMNIST tasks are proposed to simulate continuous domain shift.</p>
+      </li>
+      <li>
+        <p>In these tasks, the input images are rotated in-plane by a small angle such that the rotation is complete after 1000 steps.</p>
+      </li>
+      <li>
+        <p>A new context is assigned every 100 steps, as the per-step changes in the angle are very small.</p>
+      </li>
+      <li>
+        <p>The 10 context vectors used in the first 1000 steps are reused for the subsequent steps.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Randomly changing the context vector</p>
+
+    <ul>
+      <li>
+        <p>The paper considers an ablation where the context vector is randomly changed at every step (of the 1000 step cycle). This requires the superposition model to store 1000 models.</p>
+      </li>
+      <li>
+        <p>This approach is better than the supervised learning baseline but not as good as the proposed psp* models.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Output Interference</p>
+
+    <ul>
+      <li>
+        <p>This is the setup where the model transitions from one classification task to another.</p>
+      </li>
+      <li>
+        <p>The Incremental CIFAR dataset is used with Resnet18 as the base model.</p>
+      </li>
+      <li>
+        <p>The baseline is a standard supervised learning model where a new classification head is used for each task (since the classes have a different meaning in each dataset). The model component before the classification layer is shared across the tasks.</p>
+      </li>
+      <li>
+        <p>Even though the labels are different across the datasets, the pspBinary model, trained with a single output layer, outperforms the multi-headed baseline.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+
+
+
+
+   Towards a Unified Theory of State Abstraction for MDPs
+
+   2019-12-26T00:00:00-05:00
+   /site/2019/12/26/Towards a Unified Theory of State Abstraction for MDPs
+   <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper studies five different techniques for state abstraction in MDPs (Markov Decision Processes) and evaluates their usefulness for planning and learning.</p>
+  </li>
+  <li>
+    <p>The general idea behind abstraction is to map the actual (or observed) state to an abstract state that should be more amenable for learning.</p>
+  </li>
+  <li>
+    <p>It can be thought of as a mapping from one representation to another representation while preserving some useful properties.</p>
+  </li>
+  <li>
+    <p><a href="https://pdfs.semanticscholar.org/ca9a/2d326b9de48c095a6cb5912e1990d2c5ab46.pdf">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="general-definition">General Definition</h2>
+
+<ul>
+  <li>
+    <p>Consider an MDP \(M = &lt;S, A, P, R, \gamma&gt;\) where \(S\) is the finite set of states, \(A\) is the finite set of actions, \(P\) is the transition function, \(R\) is the bounded reward function and \(\gamma\) is the discount factor.</p>
+  </li>
+  <li>
+    <p>The abstract version of the MDP is \(\widetilde{M} = &lt;\widetilde{S}, A, \widetilde{P}, \widetilde{R}, \gamma&gt;\) where \(\widetilde{S}\) is the finite set of abstract states, \(\widetilde{P}\) is the transition function in the abstract state space and \(\widetilde{R}\) is the bounded reward function in the abstract state space.</p>
+  </li>
+  <li>
+    <p>The abstraction function \(\phi\) is a function that maps a given state \(s\) to its abstract counterpart \(\widetilde{s}\).</p>
+  </li>
+  <li>
+    <p>The inverse image \(\phi^{-1}(\widetilde{s})\) is the set of ground states that map to \(\widetilde{s}\) under the abstraction function \(\phi\).</p>
+  </li>
+  <li>
+    <p>A weighing function \(w(s)\) is used to measure how much a state \(s\) contributes to the abstract state \(\phi(s)\).</p>
+  </li>
+</ul>
+
+<h2 id="topology-of-abstraction-space">Topology of Abstraction Space</h2>
+
+<ul>
+  <li>
+    <p>Given two abstraction functions \(\phi_{1}\) and \(\phi_{2}\), \(\phi_{1}\) is said to be <em>finer</em> than \(\phi_{2}\) iff for any states \(s_{1}, s_{2}\), if \(\phi_{1}(s_{1}) = \phi_{1}(s_{2})\) then \(\phi_{2}(s_{1}) = \phi_{2}(s_{2})\).</p>
+  </li>
+  <li>
+    <p>This <em>finer</em> relation is reflexive, antisymmetric, and transitive, and induces a partial order.</p>
+  </li>
+</ul>
+
+<h2 id="five-types-of-abstraction">Five Types of Abstraction</h2>
+
+<ul>
+  <li>
+    <p>While many abstractions are possible, not all 
abstractions are equally important.</p>
+  </li>
+  <li>
+    <p>Model-irrelevance abstraction \(\phi_{model}\):</p>
+
+    <ul>
+      <li>
+        <p>If two states \(s_{1}\) and \(s_{2}\) have the same abstract state, then their one-step model is preserved.</p>
+      </li>
+      <li>
+        <p>Consider any action \(a\) and any abstract state \(\widetilde{s}\): if \(\phi_{model}(s_{1}) = \phi_{model}(s_{2})\), then \(R(s_1, a) = R(s_2, a)\) and \(\sum_{s' \in \phi_{model}^{-1}(\widetilde{s})}P_{s_1, s'}^{a} = \sum_{s' \in \phi_{model}^{-1}(\widetilde{s})}P_{s_2, s'}^{a}\).</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>\(Q^{\pi}\)-irrelevance abstraction:</p>
+
+    <ul>
+      <li>
+        <p>It preserves the state-action value function for all the states.</p>
+      </li>
+      <li>
+        <p>\(\phi_{Q^{\pi}}(s_1) = \phi_{Q^{\pi}}(s_2)\) implies \(Q^{\pi}(s_1, a) = Q^{\pi}(s_2, a)\).</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>\(Q^{*}\)-irrelevance abstraction:</p>
+
+    <ul>
+      <li>It preserves the optimal state-action value function.</li>
+    </ul>
+  </li>
+  <li>
+    <p>\(a^{*}\)-irrelevance abstraction:</p>
+
+    <ul>
+      <li>It preserves the optimal action and its value function.</li>
+    </ul>
+  </li>
+  <li>
+    <p>\(\phi_{\pi^{*}}\)-irrelevance abstraction:</p>
+
+    <ul>
+      <li>It preserves the optimal action.</li>
+    </ul>
+  </li>
+  <li>
+    <p>In terms of <em>fineness</em>, \(\phi_0 \geq \phi_{model} \geq \phi_{Q^{\pi}} \geq \phi_{Q^*} \geq \phi_{a^*} \geq \phi_{\pi^*}\). Here \(\phi_0\) is the identity mapping, i.e., \(\phi_0(s) = s\).</p>
+  </li>
+  <li>
+    <p>If a property applies to some abstraction, it also applies to all the finer abstractions.</p>
+  </li>
+</ul>
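+
+<p>Before the theorems below, a minimal sketch of what “Q-learning with an abstraction \(\phi\)” means operationally: a single tabular entry is shared by all the ground states in \(\phi^{-1}(\widetilde{s})\) (the function name and hyperparameters are my own):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from collections import defaultdict
+
+Q = defaultdict(float)   # Q-values indexed by (abstract state, action)
+
+def q_learning_step(Q, phi, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
+    # map ground states to abstract states before the usual update
+    s_abs, s_abs_next = phi(s), phi(s_next)
+    target = r + gamma * max(Q[(s_abs_next, b)] for b in actions)
+    Q[(s_abs, a)] += alpha * (target - Q[(s_abs, a)])
+    return Q
+</code></pre></div></div>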
+  <li>
+    <p>For \(\phi_{model}, \phi_{Q^{\pi}}, \phi_{Q^*}, \phi_{a^*}\), the model built with the experience converges to the true abstract model with infinite experience if the weighing function \(w(s)\) is fixed.</p>
+  </li>
+</ul>
+
+
+
+
+    ALBERT - A Lite BERT for Self-supervised Learning of Language Representations
+
+    2019-12-19T00:00:00-05:00
+    /site/2019/12/19/ALBERT - A Lite BERT for Self-supervised Learning of Language Representations
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes parameter-reduction techniques to lower the memory consumption (and improve the training speed) of BERT.</p>
+  </li>
+  <li>
+    <p>It also proposes to use a self-supervised loss (based on inter-sentence coherence) and argues that this loss is better than the NSP loss used by BERT.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1909.11942">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="architecture">Architecture</h2>
+
+<ul>
+  <li>
+    <p>ALBERT architecture is similar to that of BERT with three major differences.</p>
+  </li>
+  <li>
+    <p>Factorized Embedding Parameterization</p>
+
+    <ul>
+      <li>
+        <p>In BERT and followup works, the embedding size was tied to the size of the context vector.</p>
+      </li>
+      <li>
+        <p>Since the context vector is expected to encode the entire context, it needs to have a large dimensionality.</p>
+      </li>
+      <li>
+        <p>One consequence of this choice is that even the embedding layer (which encodes the representation for each token) has a large size. This increases the overall memory footprint of the model.</p>
+      </li>
+      <li>
+        <p>The paper proposes to factorize the embedding parameters into two smaller matrices (see the sketch after this section).</p>
+      </li>
+      <li>
+        <p>The embedding layer learns a low dimensional representation of the tokens and this representation is projected into a high dimensional space.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Cross-layer parameter sharing</p>
+
+    <ul>
+      <li>ALBERT shares all the parameters across the layers.</li>
+    </ul>
+  </li>
+  <li>
+    <p>Inter-sentence coherence loss</p>
+
+    <ul>
+      <li>
+        <p>BERT uses two losses - Masked Language Modeling loss (MLM) and Next Sentence Prediction (NSP).</p>
+      </li>
+      <li>
+        <p>In the NSP task, the model is provided a pair of sentences and it has to predict if the two sentences appear consecutively in the same document or not. Negative samples are created by sampling sentences from different documents.</p>
+      </li>
+      <li>
+        <p>The paper argues that NSP is not effective as a loss function as it merges topic prediction and coherence prediction into one task (as the two sentences come from different documents). The topic prediction is an easier task as compared to coherence prediction.</p>
+      </li>
+      <li>
+        <p>Hence the paper proposes to use the Sentence Order Prediction task where the model has to predict which of the two sentences comes first in a document. The negative samples are created by simply swapping the order in the positive samples. Hence both the sentences come from the same document and topic prediction alone cannot be used to solve the task.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
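+
+<p>A small sketch of the factorized embedding parameterization (PyTorch; the sizes below are illustrative, not ALBERT's exact configuration):</p>
+
+<pre><code class="language-python">import torch
+import torch.nn as nn
+
+# Instead of one V x H embedding matrix (~23M parameters for the sizes
+# below), learn a V x E embedding plus an E x H projection (~3.9M).
+V, E, H = 30000, 128, 768
+token_embedding = nn.Embedding(V, E)       # low-dimensional token vectors
+projection = nn.Linear(E, H, bias=False)   # project up to the hidden size
+
+tokens = torch.randint(0, V, (2, 16))      # batch of 2 sequences, 16 tokens each
+hidden = projection(token_embedding(tokens))  # shape: (2, 16, 768)
+</code></pre>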
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>Different variants (in terms of size) of ALBERT and BERT models are compared (eg ALBERT, ALBERT-x, BERT-x, etc).</p>
+  </li>
+  <li>
+    <p>In general, ALBERT models have many times fewer parameters than the corresponding BERT models.</p>
+  </li>
+  <li>
+    <p>Datasets - BookCorpus, English Wikipedia.</p>
+  </li>
+</ul>
+
+<h2 id="observations">Observations</h2>
+
+<ul>
+  <li>
+    <p>ALBERT-xxlarge significantly outperforms the BERT-large model even though it has only around 70% of the parameters of the BERT-large model.</p>
+  </li>
+  <li>
+    <p>BERT-xlarge performs worse than BERT-base, hinting that it is difficult to train such large models.</p>
+  </li>
+  <li>
+    <p>ALBERT models also have better data throughput as compared to BERT models.</p>
+  </li>
+  <li>
+    <p>For the ALBERT models, an embedding size of 128 performs the best.</p>
+  </li>
+  <li>
+    <p>As the hidden dimension is increased, the model obtains better performance, but with diminishing returns.</p>
+  </li>
+  <li>
+    <p>Very wide ALBERT models (say with a context size of 1024) do not benefit much from depth.</p>
+  </li>
+  <li>
+    <p>Using additional training data boosts the performance for most of the downstream tasks.</p>
+  </li>
+  <li>
+    <p>The paper empirically shows that using dropout could hurt the performance of the ALBERT models. This observation may not hold for BERT as it does not share parameters across layers and hence may need regularization via dropout.</p>
+  </li>
+  <li>
+    <p>ALBERT also improves the state of the art performance on GLUE, SQuAD and RACE benchmarks, for both single-model and ensemble setups.</p>
+  </li>
+</ul>
+
+
+
+
+    Everything Happens for a Reason - Discovering the Purpose of Actions in Procedural Text
+
+    2019-12-12T00:00:00-05:00
+    /site/2019/12/12/Everything Happens for a Reason - Discovering the Purpose of Actions in Procedural Text
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>Procedural text comprehension tasks focus on modeling the effect of actions and predicting what happens next.</p>
+  </li>
+  <li>
+    <p>But they do not consider <em>why</em> some actions need to happen before other actions.</p>
+  </li>
+  <li>
+    <p>The paper proposes a new model called XPAD (eXPlainable Action Dependency) that considers the <em>purpose</em> of actions while predicting their effect.</p>
+  </li>
+  <li>
+    <p>The model favors <em>effects</em> that:</p>
+
+    <ul>
+      <li>
+        <p>explain more of the actions in the text.</p>
+      </li>
+      <li>
+        <p>are more plausible given the context.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>An existing procedural text benchmark dataset (Propara) is expanded by adding the task of explaining actions by predicting their dependencies.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1909.04745">Link to the paper</a></p>
+  </li>
+  <li>
+    <p><a href="http://data.allenai.org/propara/">Link to the dataset</a></p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>Input</p>
+
+    <ul>
+      <li>
+        <p>Procedural (chronologically ordered text) sequence of <em>T</em> sentences.</p>
+      </li>
+      <li>
+        <p>List of <em>N</em> participant entities, whose state changes at some step.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Output</p>
+
+    <ul>
+      <li>
+        <p>State change matrix $\pi(T \times N)$ with four possible states - move, create, destroy, none.</p>
+      </li>
+      <li>
+        <p>This matrix tracks how each entity's state changes after each step.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Dependency Explanation Graph</p>
+
+    <ul>
+      <li>
+        <p>Identify what steps are necessary to execute a given step (say <em>s<sub>i</sub></em>) and represent this dependency in the form of a dependency explanation graph <em>G = &lt;S, E&gt;</em>.</p>
+      </li>
+      <li>
+        <p>In this graph, each node is a step and the direction of an edge encodes the order of the dependency.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+<h2 id="dependency-graph-dataset">Dependency Graph Dataset</h2>
+
+<ul>
+  <li>
+    <p><a href="https://arxiv.org/abs/1805.06975">Propara dataset</a> is expanded to extract the dependency graph using both heuristic and automated methods.</p>
+  </li>
+  <li>
+    <p>The automated method is based on the coherence assumption that if step <em>s<sub>j</sub></em> changes the state of entity <em>e<sub>k</sub></em>, then <em>s<sub>j</sub></em> is a precondition for the first subsequent step that changes the state of <em>e<sub>k</sub></em> (sketched below).</p>
+  </li>
+</ul>
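+
+<p>A minimal sketch of this coherence heuristic, assuming <code>state_changes[t]</code> holds the set of entities whose state changes at step <em>t</em> (all names are illustrative):</p>
+
+<pre><code class="language-python">def extract_dependency_edges(state_changes):
+    edges = []
+    for j, entities in enumerate(state_changes):
+        for e in entities:
+            for k in range(j + 1, len(state_changes)):
+                if e in state_changes[k]:
+                    edges.append((j, k))  # step j is a precondition of step k
+                    break  # only the first subsequent step that changes e
+    return edges
+
+# Steps: 0 creates "water", 1 moves "water", 2 destroys "water"
+print(extract_dependency_edges([{"water"}, {"water"}, {"water"}]))
+# [(0, 1), (1, 2)]
+</code></pre>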
+
+<h2 id="xpad-model">XPAD Model</h2>
+
+<ul>
+  <li>
+    <p>The model is based on the ProStruct system and uses an encoder-decoder based architecture.</p>
+  </li>
+  <li>
+    <p>Encoder</p>
+
+    <ul>
+      <li>
+        <p>Input: Sentence <em>s<sub>t</sub></em> and entity <em>e<sub>j</sub></em>.</p>
+      </li>
+      <li>
+        <p>The sentence is encoded using GloVe vectors and a BiLSTM model, and the entity is encoded as an indicator variable.</p>
+      </li>
+      <li>
+        <p>The combined representation is denoted as <em>c<sub>tj</sub></em>.</p>
+      </li>
+      <li>
+        <p>This representation is passed through an MLP to generate <em>k</em> logits that encode the probability of each entity <em>j</em> undergoing a state change at step <em>t</em>.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Decoder</p>
+
+    <ul>
+      <li>
+        <p>Beam search is performed to decode the encoder representation into the state change matrix and dependency graph using a score function that ensures global consistency.</p>
+      </li>
+      <li>
+        <p>The score function has two components:</p>
+
+        <ul>
+          <li>
+            <p>State change score - depends on the likelihood that the selected state changes at step <em>t</em> given the text and the state change history from steps <em>s<sub>1</sub></em> to <em>s<sub>t-1</sub></em>.</p>
+          </li>
+          <li>
+            <p>Dependency graph score</p>
+
+            <ul>
+              <li>
+                <p>This is based on the connectivity and likelihood of the resulting dependency explanation graph.</p>
+              </li>
+              <li>
+                <p>This score is used to bias the graph search towards:</p>
+
+                <ul>
+                  <li>
+                    <p>predictions that have an identifiable purpose, ie checking if a particular state change prediction leads to a connection in the dependency explanation graph.</p>
+                  </li>
+                  <li>
+                    <p>graphs that are more likely according to the background knowledge, to distinguish likely dependency links from the unlikely ones.</p>
+                  </li>
+                </ul>
+              </li>
+            </ul>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>During training, XPAD has access to the correct path (in the search space) and learns to minimize the joint loss corresponding to predicting the state change and the dependency explanation graph.</p>
+  </li>
+  <li>
+    <p>During testing, XPAD performs beam search to predict the most likely state change and dependency explanation graph.</p>
+  </li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p>Tasks:</p>
+
+    <ul>
+      <li>
+        <p>State change prediction</p>
+      </li>
+      <li>
+        <p>Dependency explanation prediction</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Baselines:</p>
+
+    <ul>
+      <li>
+        <p><a href="https://arxiv.org/abs/1612.03969">Recurrent Entity Networks</a></p>
+      </li>
+      <li>
+        <p><a href="https://arxiv.org/abs/1606.04582">Query-Reduction Networks</a></p>
+      </li>
+      <li>
+        <p><a href="https://arxiv.org/abs/1805.06975">ProLocal and ProGlobal</a></p>
+      </li>
+      <li>
+        <p><a href="https://arxiv.org/abs/1808.10012">ProStruct</a></p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>XPAD significantly outperforms all the baseline models on the dependency explanation task.</p>
+  </li>
+  <li>
+    <p>Improvements on the state change prediction task are less significant.</p>
+  </li>
+  <li>
+    <p>Removing dependency graph scores from XPAD leads to a drop in the F1 score.</p>
+  </li>
+  <li>
+    <p>The paper provides an elaborate discussion on the different types of errors that the XPAD system makes.</p>
+  </li>
+</ul>
+
+
+
+
+    Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
+
+    2019-12-05T00:00:00-05:00
+    /site/2019/12/05/Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper presents the MuZero algorithm that performs planning with a learned model.</p>
+  </li>
+  <li>
+    <p>The algorithm achieves state of the art results on the Atari suite (where model-free approaches generally perform the best) and on planning-oriented games like Chess and Go (where planning approaches generally perform the best).</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1911.08265">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="relation-to-standard-model-based-approaches">Relation to standard Model-Based Approaches</h2>
+
+<ul>
+  <li>
+    <p>Model-based approaches generally focus on reconstructing the true environment state or the sequence of full observations.</p>
+  </li>
+  <li>
+    <p>MuZero focuses on predicting only those aspects that are most relevant for planning - policy, value functions, and rewards.</p>
+  </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+  <li>
+    <p>The model consists of three components: (representation) encoder, dynamics function, and the prediction network.</p>
+  </li>
+  <li>
+    <p>The learning agent has two kinds of interactions - real interactions (ie the actions that are actually executed in the real environment) and hypothetical or imaginary actions (ie the actions that are executed in the learned model or the dynamics function).</p>
+  </li>
+  <li>
+    <p>At any timestep <em>t</em>, the past observations <em>o<sub>1</sub></em>, … <em>o<sub>t</sub></em> are encoded into the state <em>s<sub>t</sub></em> using the encoder.</p>
+  </li>
+  <li>
+    <p>Now the model takes hypothetical actions for the next <em>K</em> timesteps by unrolling the model for <em>K</em> steps (a small sketch appears at the end of this summary).</p>
+  </li>
+  <li>
+    <p>For each timestep <em>k = 1, …, K</em>, the dynamics model predicts the immediate reward <em>r<sub>k</sub></em> and a new hidden state <em>h<sub>k</sub></em> using the previous hidden state <em>h<sub>k-1</sub></em> and action <em>a<sub>k</sub></em>.</p>
+  </li>
+  <li>
+    <p>At the same time, the policy <em>p<sup>k</sup></em> and the value function <em>v<sup>k</sup></em> are computed using the prediction network.</p>
+  </li>
+  <li>
+    <p>The initial hidden state <em>h<sub>0</sub></em> is initialized using the state <em>s<sub>t</sub></em>.</p>
+  </li>
+  <li>
+    <p>Any MDP planning algorithm can be used to search for the optimal policy and value function given the state transitions and the rewards induced by the dynamics function.</p>
+  </li>
+  <li>
+    <p>Specifically, the MCTS (Monte Carlo Tree Search) algorithm is used and the action <em>a<sub>t+1</sub></em> (ie the action that is executed in the actual environment) is selected from the policy outputted by MCTS.</p>
+  </li>
+</ul>
+
+<h2 id="collecting-data-for-the-replay-buffer">Collecting Data for the Replay Buffer</h2>
+
+<ul>
+  <li>
+    <p>At each timestep <em>t</em>, the MCTS algorithm is executed to choose the next action (which will be executed in the real environment).</p>
+  </li>
+  <li>
+    <p>The resulting next observation <em>o<sub>t+1</sub></em> and reward <em>r<sub>t+1</sub></em> are stored and the trajectory is written to the replay buffer (at the end of the episode).</p>
+  </li>
+</ul>
+
+<h2 id="objective">Objective</h2>
+
+<ul>
+  <li>
+    <p>For every hypothetical step <em>k</em>, match the predicted policy, value, and reward to the actual target values.</p>
+  </li>
+  <li>
+    <p>The target policy is generated by the MCTS algorithm.</p>
+  </li>
+  <li>
+    <p>The target value function and reward are generated by actually playing the game (or the MDP).</p>
+  </li>
+</ul>
+
+<h2 id="relation-to-alphazero">Relation to AlphaZero</h2>
+
+<ul>
+  <li>
+    <p>MuZero leverages the search-based policy iteration from AlphaZero.</p>
+  </li>
+  <li>
+    <p>It extends AlphaZero to setups with a single agent (where self-play is not possible) and setups with a non-zero reward at the intermediate time steps.</p>
+  </li>
+  <li>
+    <p>The encoder and the prediction functions are similar to the ones used by AlphaZero.</p>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p><em>K</em> is set to 5.</p>
+  </li>
+  <li>
+    <p>Environments: 57 games in Atari along with Chess, Go and Shogi.</p>
+  </li>
+  <li>
+    <p>MuZero achieves the same level of performance as AlphaZero for Chess and Shogi. In Go, MuZero slightly outperforms AlphaZero despite doing fewer computations per node in the search tree.</p>
+  </li>
+  <li>
+    <p>In Atari, MuZero achieves a new state-of-the-art compared to both model-based and model-free approaches.</p>
+  </li>
+  <li>
+    <p>The paper considers a variant called MuZero Reanalyze that reanalyzes old trajectories by re-running the MCTS algorithm with the updated network parameters. The motivation is to obtain better sample complexity.</p>
+  </li>
+  <li>
+    <p>MuZero performs well even when using a single simulation of MCTS (during inference).</p>
+  </li>
+  <li>
+    <p>During training, using more simulations of MCTS helps to achieve better performance, though even just 6 simulations per move is sufficient to learn a good model for Ms. Pacman.</p>
+  </li>
+</ul>
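+
+<p>A minimal sketch of the <em>K</em>-step unroll described in the Approach section; <code>encoder</code>, <code>dynamics</code>, and <code>prediction</code> stand in for the three learned components, and all names are illustrative:</p>
+
+<pre><code class="language-python">def unroll(encoder, dynamics, prediction, observations, actions, K=5):
+    h = encoder(observations)                # s_t from past observations o_1 ... o_t
+    outputs = []
+    for k in range(K):
+        reward, h = dynamics(h, actions[k])  # r_k, h_k from h_{k-1} and a_k
+        policy, value = prediction(h)        # p^k, v^k from the new hidden state
+        outputs.append((reward, policy, value))
+    return outputs  # matched against MCTS policies and observed rewards/returns
+</code></pre>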
+
+
+
+
+    Contrastive Learning of Structured World Models
+
+    2019-11-28T00:00:00-05:00
+    /site/2019/11/28/Contrastive Learning of Structured World Models
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper introduces Contrastively-trained Structured World Models (C-SWMs).</p>
+  </li>
+  <li>
+    <p>These models use a contrastive approach for learning representations in environments with compositional structure.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1911.12247">Link to the paper</a></p>
+  </li>
+  <li>
+    <p><a href="https://github.com/tkipf/c-swm">Link to the code</a>.</p>
+  </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+  <li>
+    <p>The training data is in the form of an experience buffer \(B = \{(s_t, a_t, s_{t+1})\}_{t=1}^T\) of state transition tuples.</p>
+  </li>
+  <li>
+    <p>The goal is to learn:</p>
+
+    <ul>
+      <li>
+        <p>an encoder \(E\) that maps the observed states $s_t$ (pixel state observations) to latent states $z_t$.</p>
+      </li>
+      <li>
+        <p>a transition model \(T\) that predicts the dynamics in the hidden state.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The model defines the energy of a tuple \((s_t, a_t, s_{t+1})\) as \(H = d(z_t + T(z_t, a_t), z_{t+1})\) (sketched at the end of this summary).</p>
+  </li>
+  <li>
+    <p>The model has an inductive bias for modeling the effect of an action as a translation in the abstract state space.</p>
+  </li>
+  <li>
+    <p>An extra hinge-loss term is added: \(max(0, \gamma - d(\tilde{z}_{t}, z_{t+1}))\) where \(\tilde{z}_{t} = E(\tilde{s}_{t})\) is a corrupted latent state corresponding to a randomly sampled state \(\tilde{s}_{t}\).</p>
+  </li>
+</ul>
+
+<h2 id="object-oriented-state-factorization">Object-Oriented State Factorization</h2>
+
+<ul>
+  <li>
+    <p>The goal is to learn object-oriented representations where each state embedding is structured as a set of objects.</p>
+  </li>
+  <li>
+    <p>Assuming the number of object slots to be \(K\), the latent space and the action space can be factored into \(K\) independent latent spaces (\(Z_1 \times ... \times Z_K\)) and action spaces (\(A_1 \times ... \times A_K\)) respectively.</p>
+  </li>
+  <li>
+    <p>There are <em>K</em> CNN-based object extractors and an MLP-based object encoder.</p>
+  </li>
+  <li>
+    <p>The actions are represented as one-hot vectors.</p>
+  </li>
+  <li>
+    <p>A fully connected graph is induced over the <em>K</em> objects (representations) and the transition function is modeled as a Graph Neural Network (GNN) over this graph.</p>
+  </li>
+  <li>
+    <p>The transition function produces the change in the latent state representation of each object.</p>
+  </li>
+  <li>
+    <p>The factorization can be taken into account in the loss function by summing over the loss corresponding to each object.</p>
+  </li>
+</ul>
+
+<h2 id="environments">Environments</h2>
+
+<ul>
+  <li>
+    <p>Grid World Environments - 2D shapes, 3D blocks</p>
+  </li>
+  <li>
+    <p>Atari games - Pong and Space Invaders</p>
+  </li>
+  <li>
+    <p>3-body physics simulation</p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>A random policy is used to collect the training data.</p>
+  </li>
+  <li>
+    <p>Evaluation is performed in the latent space (no reconstruction in the pixel space) using ranking metrics. The observations (to compare against) are randomly sampled from the buffer.</p>
+  </li>
+  <li>
+    <p>Baselines - auto-encoder based World Models and the <a href="https://arxiv.org/abs/1905.11169">Physics as Inverse Graphics model</a>.</p>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p>In the grid-world environments, C-SWM models the latent dynamics almost perfectly.</p>
+  </li>
+  <li>
+    <p>Removing either the state factorization or the GNN transition model hurts the performance.</p>
+  </li>
+  <li>
+    <p>C-SWM performs well on Atari as well, but the results tend to have high variance.</p>
+  </li>
+  <li>
+    <p>The optimal value of $K$ should be obtained by hyperparameter tuning.</p>
+  </li>
+  <li>
+    <p>For the 3-body physics tasks, both the baselines and the proposed models work quite well.</p>
+  </li>
+  <li>
+    <p>Interestingly, the paper has a section on limitations:</p>
+
+    <ul>
+      <li>
+        <p>The object extractor module can not disambiguate between multiple instances of the same object (in a scene).</p>
+      </li>
+      <li>
+        <p>The current formulation of C-SWM can only be used with deterministic environments.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
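+
+<p>A minimal sketch of the training objective defined above, assuming <code>E</code> and <code>T</code> are callables and \(d\) is the squared Euclidean distance (PyTorch; all names are illustrative):</p>
+
+<pre><code class="language-python">import torch
+
+def contrastive_loss(E, T, s_t, a_t, s_next, s_corrupt, gamma=1.0):
+    z_t, z_next, z_corrupt = E(s_t), E(s_next), E(s_corrupt)
+    d = lambda a, b: ((a - b) ** 2).sum(dim=-1)
+    energy = d(z_t + T(z_t, a_t), z_next)  # H for the observed transition
+    hinge = torch.clamp(gamma - d(z_corrupt, z_next), min=0)  # negative term
+    return (energy + hinge).mean()
+</code></pre>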
+
+
+
+
+    Gossip based Actor-Learner Architectures for Deep RL
+
+    2019-09-12T00:00:00-04:00
+    /site/2019/09/12/Gossip based Actor-Learner Architectures for Deep RL
+    <ul>
+  <li>
+    <p><a href="https://arxiv.org/abs/1906.04585">Link to the paper</a></p>
+  </li>
+  <li>
+    <p>The paper considers the task of training an RL system by sampling data from multiple simulators (over parallel devices).</p>
+  </li>
+  <li>
+    <p>The setup is that of a distributed RL setting with <em>n</em> agents or actor-learners (composed of a single learner and several actors). These agents are trying to maximize a common value function.</p>
+  </li>
+  <li>
+    <p>One (existing) approach is to perform on-policy updates with a shared policy. The policy could be updated in a synchronous manner (does not scale well) or an asynchronous manner (can be unstable due to stale gradients).</p>
+  </li>
+  <li>
+    <p>Off-policy approaches allow for better computational efficiency but can be unstable during training.</p>
+  </li>
+  <li>
+    <p>The paper proposes the Gossip-based Actor-Learner Architecture (GALA), which uses asynchronous communication (gossip) between the <em>n</em> agents to improve the training of Deep RL models.</p>
+  </li>
+  <li>
+    <p>These agents are expected to converge to the same policy.</p>
+  </li>
+  <li>
+    <p>During training, the different agents are not required to share the same policy and it is sufficient that the agents' policies remain $\epsilon$-close to each other. This relaxation allows the policies to be trained asynchronously.</p>
+  </li>
+  <li>
+    <p>The GALA approach is combined with A2C agents, resulting in GALA-A2C agents. They have better computational efficiency and scalability (as compared to A2C) and perform similarly to A3C and Impala.</p>
+  </li>
+  <li>
+    <p>Training alternates between one local policy-gradient (and TD update) and asynchronous gossip between agents.</p>
+  </li>
+  <li>
+    <p>During the gossip step, the agents send their parameters to some of the other agents (referred to as the peers) and update their parameters based on the parameters received from the other agents (for which the given agent is a peer).</p>
+  </li>
+  <li>
+    <p>GALA agents are implemented using non-blocking communication so that they can operate asynchronously.</p>
+  </li>
+  <li>
+    <p>The paper includes a proof that the policies learned by the different agents are within $\epsilon$ distance of each other (ie all the policies lie within an $\epsilon$-distance ball), thus ensuring that the policies do not diverge much from each other.</p>
+  </li>
+  <li>
+    <p>Six games from the Atari 2600 suite are used for the experiments.</p>
+  </li>
+  <li>
+    <p>Baselines: A2C, A3C, Impala</p>
+  </li>
+  <li>
+    <p>GALA agents are configured in a directed ring graph topology.</p>
+  </li>
+  <li>
+    <p>With A2C, as the number of simulators increases, the number of convergent runs (runs with a threshold reward) decreases.</p>
+  </li>
+  <li>
+    <p>Using gossip algorithms increases or maintains the number of convergent runs. It also improves the performance, sample efficiency and compute efficiency of A2C across all the six games.</p>
+  </li>
+  <li>
+    <p>When compared to Impala and A3C, GALA-A2C generally outperforms (or performs as well as) those baselines.</p>
+  </li>
+  <li>
+    <p>Given that the learned policies remain within an $\epsilon$ ball, the agents' gradients are less correlated as compared to the A2C agents.</p>
+  </li>
+</ul>
+
+
+
+
+    How to train your MAML
+
+    2019-09-05T00:00:00-04:00
+    /site/2019/09/05/How to train your MAML
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes MAML++ - a modification of the MAML algorithm that stabilizes its training, improves generalization performance and reduces the computational overhead.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1810.09502">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="notes">Notes</h2>
+
+<h3 id="unstable-training">Unstable Training</h3>
+
+<ul>
+  <li>
+    <p>Training the outer loop requires unfolding the inner loop multiple times.</p>
+  </li>
+  <li>
+    <p>In the absence of skip connections, the gradient is multiplied by the same parameters multiple times.</p>
+  </li>
+  <li>
+    <p>Large depth and absent skip connections could lead to exploding and vanishing gradients respectively.</p>
+  </li>
+  <li>
+    <p>The paper proposes to stabilize the gradient propagation by minimizing the target set loss computed by the base-network after every step towards a support set task.</p>
+  </li>
+  <li>
+    <p>It is important to anneal the contribution of earlier steps and increase the contribution of later steps over time.</p>
+  </li>
+</ul>
+
+<h3 id="second-order-derivatives-are-expensive-to-compute">Second Order derivatives are expensive to compute</h3>
+
+<ul>
+  <li>
+    <p>While the first-order MAML is faster, the resulting model may not have as good a generalization error as the second-order MAML.</p>
+  </li>
+  <li>
+    <p>The paper proposes to use derivative order annealing where first-order gradients are used for the first 50 epochs and the network uses second-order gradients from thereon (see the sketch below).</p>
+  </li>
+  <li>
+    <p>This derivative order annealing appears to be more stable than models that use second-order derivatives only.</p>
+  </li>
+</ul>
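+
+<p>A minimal sketch of derivative-order annealing in a MAML-style inner loop (PyTorch); the 50-epoch threshold follows the paper, everything else is illustrative:</p>
+
+<pre><code class="language-python">import torch
+
+def inner_update(params, loss, lr, epoch):
+    second_order = epoch &gt;= 50  # first-order gradients early in training
+    grads = torch.autograd.grad(loss, params, create_graph=second_order)
+    return [p - lr * g for p, g in zip(params, grads)]
+</code></pre>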
+
+<h3 id="batch-normalization">Batch Normalization</h3>
+
+<ul>
+  <li>
+    <p>In MAML, the statistics of the current batch are used for normalization instead of accumulating the running statistics.</p>
+  </li>
+  <li>
+    <p>The paper proposes to collect the statistics per step, which can increase the convergence speed, stability, and generalization performance.</p>
+  </li>
+  <li>
+    <p>In MAML, the batch normalization biases are not updated in the inner loop, which can adversely impact the performance.</p>
+  </li>
+  <li>
+    <p>The paper proposes to learn a set of biases (per step) within the inner loop update.</p>
+  </li>
+</ul>
+
+<h3 id="fixed-learning-rate">Fixed Learning Rate</h3>
+
+<ul>
+  <li>
+    <p>MAML uses a single learning rate across all the steps and all the parameters. This means a single learning rate must be tuned (as a hyperparameter) to work well for all the layers and steps.</p>
+  </li>
+  <li>
+    <p>An alternate solution would be to learn a separate learning rate per parameter, but this can be impractical as it doubles the number of parameters to be learned.</p>
+  </li>
+  <li>
+    <p>The paper proposes to learn a learning rate and direction for each layer in the network, for each step it takes in the inner loop.</p>
+  </li>
+  <li>
+    <p>The paper also proposes to anneal the learning rate of the outer loop (using cosine annealing) as it helps to achieve better generalization.</p>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p>Using these modifications helps to outperform the MAML model on both the Omniglot and MiniImagenet datasets.</p>
+  </li>
+  <li>
+    <p>The biggest benefit comes from learning the per-layer, per-step learning rates and from using the per-step batch normalization.</p>
+  </li>
+</ul>
+
+
+
+
+    PHYRE - A New Benchmark for Physical Reasoning
+
+    2019-08-29T00:00:00-04:00
+    /site/2019/08/29/PHYRE - A New Benchmark for Physical Reasoning
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes the PHYRE (PHYsical REasoning) benchmark - consisting of classic mechanical puzzles in 2D physical environments - as a means to evaluate the physical reasoning ability of machine learning models.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1908.05656">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="environment">Environment</h2>
+
+<ul>
+  <li>
+    <p>2D world that obeys Newtonian mechanics.</p>
+  </li>
+  <li>
+    <p>Gravitational force + Friction.</p>
+  </li>
+  <li>
+    <p>Non-deformable objects that can be static (ie fixed) or dynamic (ie can move and are affected by collisions etc).</p>
+  </li>
+</ul>
+
+<h2 id="task">Task</h2>
+
+<ul>
+  <li>
+    <p>The learning agent starts in some initial world state (ie configuration of objects).</p>
+  </li>
+  <li>
+    <p>The goal is described in the form of (<code class="language-plaintext highlighter-rouge">subject</code>, <code class="language-plaintext highlighter-rouge">relation</code>, <code class="language-plaintext highlighter-rouge">object</code>) where the agent's task is to satisfy the <code class="language-plaintext highlighter-rouge">relation</code> between the <code class="language-plaintext highlighter-rouge">subject</code> and the <code class="language-plaintext highlighter-rouge">object</code>.</p>
+  </li>
+  <li>
+    <p>Currently, only the “touch” <code class="language-plaintext highlighter-rouge">relation</code> is supported.</p>
+  </li>
+</ul>
+
+<h2 id="setup">Setup</h2>
+
+<ul>
+  <li>
+    <p>The learning agent has to take a single action - placing one or more new dynamic objects in the world.</p>
+  </li>
+  <li>
+    <p>A simulator is run on the new configuration (for a fixed amount of time) to check if the goal condition is satisfied.</p>
+  </li>
+  <li>
+    <p>At the end of the simulation, a binary reward and intermediate observations (collected as the simulator executes) are provided to the learning agent.</p>
+  </li>
+  <li>
+    <p>These observations are 256*256 grids where each grid cell can take 1 of 7 values (denoting different types of objects).</p>
+  </li>
+  <li>
+    <p>Since only one relation is currently supported, the color is sufficient to encode the goal.</p>
+  </li>
+</ul>
+
+<h2 id="benchmark-tiers">Benchmark Tiers</h2>
+
+<ul>
+  <li>
+    <p>Two benchmark tiers are provided where each tier comprises a combination of:</p>
+
+    <ul>
+      <li>
+        <p>a predefined set of all the actions that the agent is allowed to perform.</p>
+      </li>
+      <li>
+        <p>a set of tasks that can be solved by at least one action from the allowed action set.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p><strong>PHYRE-B</strong> - The agent is allowed to place a single ball (of any radius) at any valid location.</p>
+  </li>
+  <li>
+    <p><strong>PHYRE-2B</strong> - The agent is allowed to place 2 balls at any valid pair of locations.</p>
+  </li>
+  <li>
+    <p>Each of the two tiers has 25 task templates where each template comprises variants of a single task (same goal but different initial conditions).</p>
+  </li>
+</ul>
+
+<h2 id="evaluation">Evaluation</h2>
+
+<ul>
+  <li>
+    <p>Two evaluation setups are considered:</p>
+
+    <ul>
+      <li>
+        <p><strong>within-template</strong> where the agent is trained on some tasks in a template and evaluated on a set of held-out tasks from the same template.</p>
+      </li>
+      <li>
+        <p><strong>cross-template</strong> where the agent is evaluated on tasks from a different template.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>In the training phase, the model has access to the simulator (but not to the correct solution). So the model could learn an action-prediction model or a forward dynamics model or both.</p>
+  </li>
+  <li>
+    <p>In the testing phase, the model can query the simulator only a few times. Each query provides it with the binary reward and the intermediate observations.</p>
+  </li>
+</ul>
+
+<h2 id="performance-measure">Performance Measure</h2>
+
+<ul>
+  <li>
+    <p>The emphasis is on solving more tasks (in fewer queries) during the test phase.</p>
+  </li>
+  <li>
+    <p>This requirement is captured using a metric called AUCCESS.</p>
+  </li>
+  <li>
+    <p>In general, the tasks in PHYRE-2B are harder than the tasks in PHYRE-B.</p>
+  </li>
+</ul>
+
+<h2 id="baseline-agents">Baseline Agents</h2>
+
+<ul>
+  <li>
+    <p>Random Agent - Randomly samples actions.</p>
+  </li>
+  <li>
+    <p>Non-parametric agent (MEM) - generates R actions at random and uses the simulator to check how many tasks can be solved using these R random actions. During testing, the R actions are tried in decreasing order of the number of tasks they solve.</p>
+  </li>
+  <li>
+    <p>Non-parametric agent with online learning (MEM-O) - Variant of MEM where an online adaptation step is performed at test time (to update the rank of the actions).</p>
+  </li>
+  <li>
+    <p>Deep Q Networks with an action encoder, an observation encoder and a fusion model (combining the action and observation representations).</p>
+  </li>
+  <li>
+    <p>DQN with online learning (DQN-O): Variant of DQN with online updates (during the test phase).</p>
+  </li>
+  <li>
+    <p>Contextual bandits.</p>
+  </li>
+  <li>
+    <p>Policy learning approaches like PPO and A2C.</p>
+  </li>
+</ul>
+
+<h2 id="observations">Observations</h2>
+
+<ul>
+  <li>
+    <p>Both contextual bandits and policy-based approaches show poor training stability.</p>
+  </li>
+  <li>
+    <p>The best agent, DQN-O, reaches an AUCCESS of 56.2% on PHYRE-B and 39.26% on PHYRE-2B. In general, agents with online adaptation perform better.</p>
+  </li>
+  <li>
+    <p>The tasks are designed such that 100000 attempts are sufficient to solve 100% of the tasks in PHYRE-B and 95% of the tasks in PHYRE-2B.</p>
+  </li>
+  <li>
+    <p>Even though only two tiers are provided right now, the benchmark is readily extensible and new tasks can be added in the future.</p>
+  </li>
+</ul>
+
+
+
+
+    Large Memory Layers with Product Keys
+
+    2019-08-22T00:00:00-04:00
+    /site/2019/08/22/Large Memory Layers with Product Keys
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>The paper proposes a structured key-value memory layer that:
+    <ul>
+      <li>Can scale to a very large size (and capacity).</li>
+      <li>Has very low computational overhead.</li>
+      <li>Supports exact search in the keyspace.</li>
+      <li>Can be easily integrated with neural networks.</li>
+    </ul>
+  </li>
+  <li><a href="https://arxiv.org/abs/1907.05242">Link to the paper</a></li>
+</ul>
+
+<h2 id="architecture">Architecture</h2>
+
+<ul>
+  <li>
+    <p>The memory layer is composed of 3 components:</p>
+
+    <ul>
+      <li>
+        <p><strong>Query Network</strong></p>
+
+        <ul>
+          <li>Maps the input to a latent space.</li>
+          <li>Can be implemented as a feed-forward network.</li>
+          <li>Adding batch-norm on top of the query network helps to spread out the keys.</li>
+        </ul>
+      </li>
+      <li>
+        <p><strong>Key selection module</strong> (see the sketch after this section)</p>
+
+        <ul>
+          <li>Let's say there are a total of <em>K</em> keys of dimensionality <em>d<sub>q</sub></em>, of which we want to select the top <em>k</em> keys.</li>
+          <li>Partition the set of keys into two sets of <em>subkeys</em> (say <em>Q<sub>1</sub></em> and <em>Q<sub>2</sub></em>) where each subset has \(\sqrt{K}\) subkeys of dimensionality <em>d<sub>q</sub>/2</em>.</li>
+          <li>The query is split into two subqueries (say <em>q<sub>1</sub></em> and <em>q<sub>2</sub></em>).</li>
+          <li>Each of the two subqueries is compared with every subkey in its corresponding set. For example, <em>q<sub>1</sub></em> is compared with every subkey in <em>Q<sub>1</sub></em>.</li>
+          <li>The top <em>k</em> ranked subkeys are selected from each set to create two new sets <em>C<sub>1</sub></em> and <em>C<sub>2</sub></em>.</li>
+          <li>The subkeys from these two sets are combined under the concatenation operator to obtain <em>k<sup>2</sup></em> candidate keys.</li>
+          <li>The final top <em>k</em> (concatenated) keys are selected from these <em>k<sup>2</sup></em> candidates.</li>
+          <li>The overall complexity is $O((\sqrt{K} + k^2) \times d_q)$ where <em>K</em> is the total number of keys.</li>
+        </ul>
+      </li>
+      <li>
+        <p><strong>Value lookup table</strong></p>
+
+        <ul>
+          <li>The values (corresponding to the selected keys) are aggregated (using a weighted sum operation) to obtain the output.</li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>All the parameters are trainable, though, in practice, only the selected <em>k</em> memory slots are updated.</p>
+  </li>
+  <li>
+    <p>Using a multi-head attention mechanism helps to improve the performance further.</p>
+  </li>
+</ul>
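+
+<p>A sketch of the product-key lookup described above, assuming dot-product scoring so that a concatenated key's score is the sum of its two half-scores (PyTorch; all names and sizes are illustrative):</p>
+
+<pre><code class="language-python">import torch
+
+def product_key_topk(query, subkeys1, subkeys2, k):
+    # Only 2 * sqrt(K) subkey comparisons plus a re-ranking of k * k
+    # candidates are needed, instead of scoring all K keys.
+    q1, q2 = query.chunk(2)           # split the query into two subqueries
+    v1, i1 = (subkeys1 @ q1).topk(k)  # top-k subkeys in the first half
+    v2, i2 = (subkeys2 @ q2).topk(k)  # top-k subkeys in the second half
+    cand = (v1[:, None] + v2[None, :]).flatten()  # k * k combined scores
+    scores, flat = cand.topk(k)       # final top-k among the candidates
+    rows, cols = flat // k, flat % k
+    key_ids = i1[rows] * subkeys2.shape[0] + i2[cols]  # index into the K values
+    return scores, key_ids
+
+half = 8                              # d_q / 2
+Q1, Q2 = torch.randn(32, half), torch.randn(32, half)  # sqrt(K) = 32, K = 1024
+scores, ids = product_key_topk(torch.randn(2 * half), Q1, Q2, k=4)
+</code></pre>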
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+  <li>
+    <p>One or more feedforward layers in the transformer are replaced by the memory layers.</p>
+  </li>
+  <li>
+    <p>The model is evaluated on large scale language modeling tasks with 140 Gb of data from common crawl corpora (28 billion words).</p>
+  </li>
+  <li>
+    <p>Evaluation metrics</p>
+
+    <ul>
+      <li>
+        <p>Perplexity on the test set.</p>
+      </li>
+      <li>
+        <p>Fraction of accessed values.</p>
+      </li>
+      <li>
+        <p>KL divergence between the (normalized) weights of key access and the uniform distribution.</p>
+      </li>
+      <li>
+        <p>The last two metrics are used together to determine how well the keys are utilized.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+  <li>
+    <p>Given the large size of the training dataset, adding more layers to the transformer model helps.</p>
+  </li>
+  <li>
+    <p>The effect of using a memory layer is stronger than the effect of adding new layers to the transformer. For example, a 12 layer transformer + memory layer outperforms a 24 layer transformer while being almost twice as fast.</p>
+  </li>
+  <li>
+    <p>The best position to place the memory is at an intermediate layer; placing the memory layer right after the input or just before the softmax layer does not work well in practice.</p>
+  </li>
+</ul>
+
+
+
+
+    Abductive Commonsense Reasoning
+
+    2019-08-15T00:00:00-04:00
+    /site/2019/08/15/Abductive Commonsense Reasoning
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper presents the task of abductive NLI (pronounced as <em>alpha NLI</em>) where the model needs to perform abductive reasoning.</p>
+  </li>
+  <li>
+    <p>Abductive reasoning is the inference to the most plausible explanation. Even though it is considered to be an important component for understanding narratives, the work in this domain is sparse.</p>
+  </li>
+  <li>
+    <p>A new dataset called Abductive Reasoning in narrative Text (ART), consisting of 20K narrative contexts and 200k explanations, is also provided. The dataset models the task as multiple-choice questions to make the evaluation process easy.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1908.05739">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="task-setup">Task Setup</h2>
+
+<ul>
+  <li>
+    <p>Given a pair of observations <em>O<sub>1</sub></em> and <em>O<sub>2</sub></em> and two hypotheses <em>h<sub>1</sub></em> and <em>h<sub>2</sub></em>, the task is to select the more plausible hypothesis.</p>
+  </li>
+  <li>
+    <p>In general, <em>P(h | O<sub>1</sub>, O<sub>2</sub>)</em> is proportional to <em>P(h | O<sub>1</sub>)P(O<sub>2</sub> | h, O<sub>1</sub>)</em>.</p>
+  </li>
+  <li>
+    <p>Different independence assumptions can be imposed on the structure of the problem, eg one assumption could be that the hypothesis is independent of the observations, while the “fully connected” assumption would jointly model both the observations and the hypothesis.</p>
+  </li>
+</ul>
+
+<h2 id="dataset">Dataset</h2>
+
+<ul>
+  <li>
+    <p>Along with crowdsourcing several plausible hypotheses for each observation-pair instance, an adversarial filtering algorithm (AF) is used to remove weak pairs of hypotheses.</p>
+  </li>
+  <li>
+    <p>Observation pairs are created using the <a href="https://aclweb.org/anthology/N16-1098">ROCStories dataset</a>, which is a collection of short, manually crafted stories of 5 sentences.</p>
+  </li>
+  <li>
+    <p>The average word length for both the context and the hypothesis is between 8 and 9.</p>
+  </li>
+  <li>
+    <p>To collect plausible hypotheses, the crowd workers were asked to fill in a plausible “in-between” sentence in natural language.</p>
+  </li>
+  <li>
+    <p>Given the plausible hypothesis, the crowd workers were asked to create an implausible hypothesis by editing fewer than 6 words.</p>
+  </li>
+  <li>
+    <p>The adversarial filtering approach from <a href="https://aclweb.org/anthology/D18-1009">Zellers et al.</a> is used with BERT as the adversary. A temperature parameter is introduced to control the maximum number of instances that can be changed in each adversarial filtering iteration.</p>
+  </li>
+</ul>
+
+<h2 id="key-observations">Key Observations</h2>
+
+<ul>
+  <li>
+    <p>Human performance: 91.4%</p>
+  </li>
+  <li>
+    <p>Baselines like an SVM classifier, a bag-of-words classifier (using GloVe) and max-pooling over BiLSTM representations: approx 50%</p>
+  </li>
+  <li>
+    <p>Entailment NLI baseline: 59%. This highlights the additional complexity of abductive NLI as compared to entailment NLI.</p>
+  </li>
+  <li>
+    <p>BERT: 68.9%</p>
+  </li>
+  <li>
+    <p>GPT: 63.1%</p>
+  </li>
+  <li>
+    <p>Numerical and spatial knowledge-based data points are particularly hard.</p>
+  </li>
+  <li>
+    <p>The model is more likely to fail when the narrative created by the incorrect hypothesis is plausible.</p>
+  </li>
+</ul>
+
+
+
+
+    Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
+
+    2019-08-08T00:00:00-04:00
+    /site/2019/08/08/Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper proposes a new algorithm called Probabilistic Ensembles with Trajectory Sampling (PETS) that combines uncertainty-aware deep learning models (ensembles of deep learning models that encode uncertainty) with sampling-based uncertainty propagation.</p>
+  </li>
+  <li>
+    <p>PETS improves over other probabilistic MBRL approaches by separating epistemic uncertainty (due to limited training data) from aleatoric uncertainty (inherent in the system).</p>
+  </li>
+  <li>
+    <p><a href="">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="uncertainty-aware-neural-network-dynamics-model">Uncertainty-Aware Neural Network Dynamics Model</h2>
+
+<ul>
+  <li>
+    <p>Aleatoric uncertainty can be accounted for by learning a parameterized distribution (probabilistic neural network) trained with negative log-likelihood.</p>
+  </li>
+  <li>
+    <p>Epistemic uncertainty can be accounted for by either having an infinite amount of data or by using ensembles.</p>
+  </li>
+  <li>
+    <p>The paper uses a neural network to predict the mean and standard deviation of a Gaussian distribution which defines the predictive model. This setup is referred to as the “probabilistic” model and denoted by <strong>P</strong>.</p>
+  </li>
+  <li>
+    <p>The alternate setup is the deterministic model where a neural network is used to make a point prediction (denoted by <strong>D</strong>).</p>
+  </li>
+  <li>
+    <p>An ensemble of probabilistic models is denoted as <strong>PE</strong> while that of deterministic models is denoted as <strong>DE</strong>.</p>
+  </li>
+</ul>
+
+<h2 id="planning-and-control-with-learned-dynamics">Planning and Control with learned Dynamics</h2>
+
+<ul>
+  <li>
+    <p>Model Predictive Control (MPC) is used for planning.</p>
+  </li>
+  <li>
+    <p>Given a start state and an action sequence, the probabilistic dynamics model induces a distribution over the trajectories.</p>
+  </li>
+  <li>
+    <p>The first action, among the sequence of optimized actions, is executed.</p>
+  </li>
+  <li>
+    <p>Instead of random shooting, the <a href="https://www.sciencedirect.com/science/article/pii/B9780444538598000035">Cross Entropy Method (CEM)</a> is used.</p>
+  </li>
+</ul>
+
+<h2 id="trajectory-sampling">Trajectory Sampling</h2>
+
+<ul>
+  <li>
+    <p>Let us say there are B bootstrap models in the ensemble. Given the current state, P particles are created and each particle is propagated using one of the bootstrap models. Two variants are considered:</p>
+
+    <ul>
+      <li>
+        <p>TS1 - At each timestep, each particle samples a bootstrap. In this case, particle separation cannot be attributed to the compounding effects of the bootstraps.</p>
+      </li>
+      <li>
+        <p>TS$\infty$ - The bootstrapped model (per particle) is sampled just once and is not changed after that (see the sketch below). This setup separates aleatoric and epistemic uncertainty. Aleatoric state variance is the average variance of the particles of the same bootstrap, while epistemic state variance is the variance of the average of particles with the same bootstrap index.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
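+
+<p>A minimal sketch of TS$\infty$ propagation, assuming <code>models</code> is a list of callables mapping (state, action) to a sampled next state (NumPy; all names are illustrative):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def ts_inf_rollout(models, state, actions, num_particles):
+    # Each particle keeps one bootstrap model for the whole rollout.
+    assignment = np.random.randint(len(models), size=num_particles)
+    particles = [state.copy() for _ in range(num_particles)]
+    trajectories = [[] for _ in range(num_particles)]
+    for a in actions:
+        for i in range(num_particles):
+            particles[i] = models[assignment[i]](particles[i], a)
+            trajectories[i].append(particles[i])
+    return assignment, trajectories  # group by assignment to split uncertainties
+</code></pre>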
+
+<h2 id="result">Result</h2>
+
+<ul>
+  <li>
+    <p>The proposed approach reaches the asymptotic performance of state-of-the-art model-free algorithms in much fewer samples.</p>
+  </li>
+  <li>
+    <p>The general performance trend is probabilistic ensemble &gt; probabilistic model &gt; deterministic ensemble &gt; deterministic model.</p>
+  </li>
+  <li>
+    <p>Initial experiments for learning a policy by propagating gradients through the ensemble of models did not work; this has been left as future work.</p>
+  </li>
+</ul>
+
+
+
+
+    Assessing Generalization in Deep Reinforcement Learning
+
+    2019-08-01T00:00:00-04:00
+    /site/2019/08/01/Assessing Generalization in Deep Reinforcement Learning
+    <ul>
+  <li>
+    <p>The paper presents a benchmark and experimental protocol (environments, metrics, baselines, training/testing setup) to evaluate RL algorithms for generalization.</p>
+  </li>
+  <li>
+    <p>Several RL algorithms are evaluated and the key takeaway is that the “vanilla” RL algorithms can generalize better than the RL algorithms that are specifically designed to generalize, given enough diversity in the distribution of the training environments.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1810.12282">Link to the paper</a></p>
+  </li>
+  <li>
+    <p>The focus is on evaluating generalization to environmental changes that affect the system dynamics (and not the goal or rewards).</p>
+  </li>
+  <li>
+    <p>Two generalization regimes are considered:</p>
+
+    <ul>
+      <li>
+        <p>Interpolation - parameters of the test environment are similar to the parameters of the training environment.</p>
+      </li>
+      <li>
+        <p>Extrapolation - parameters of the test environment are different from the parameters of the training environment.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The following algorithms are considered as part of the benchmark:</p>
+
+    <ul>
+      <li>
+        <p>“Vanilla” RL algorithms - A2C, PPO</p>
+      </li>
+      <li>
+        <p>RL algorithms that are designed to generalize:</p>
+
+        <ul>
+          <li>
+            <p>EPOpt - Learn a (robust) policy that maximizes the expected reward over the most difficult distribution of environments (ones with the worst expected reward).</p>
+          </li>
+          <li>
+            <p>RL<sup>2</sup> - Learn an (adaptive) policy that can adapt to the current environment/task by considering the trajectory and not just the state transition sequence.</p>
+          </li>
+        </ul>
+      </li>
+      <li>
+        <p>These specially designed RL algorithms can be optimized using either A2C or PPO, leading to combinations like EPOpt-A2C or EPOpt-PPO etc.</p>
+      </li>
+      <li>
+        <p>The models are composed either entirely of feedforward networks or of feedforward + recurrent networks.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Environments</p>
+
+    <ul>
+      <li>
+        <p>CartPole, MountainCar, Acrobot, and Pendulum from OpenAI Gym.</p>
+      </li>
+      <li>
+        <p>HalfCheetah and Hopper from OpenAI Roboschool.</p>
+      </li>
+      <li>
+        <p>Three versions of each environment are considered:</p>
+
+        <ul>
+          <li>
+            <p>Deterministic: Environment parameters are fixed. This case corresponds to the standard environment setup in classical RL.</p>
+          </li>
+          <li>
+            <p>Random: Environment parameters are sampled randomly. This case corresponds to sampling from a distribution of environments.</p>
+          </li>
+          <li>
+            <p>Extreme: Environment parameters are sampled from their extreme values. This case corresponds to the edge-case environments which would generally not be encountered during training.</p>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Performance Metrics</p>
+
+    <ul>
+      <li>
+        <p>Average total reward per episode.</p>
+      </li>
+      <li>
+        <p>Success percentage: Percentage of episodes where a certain goal (or reward) is obtained.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Evaluation Metrics/Setups</p>
+
+    <ul>
+      <li>
+        <p>Default: success percentage when training and evaluating on the deterministic version of the environment.</p>
+      </li>
+      <li>
+        <p>Interpolation: success percentage when training and evaluating on the random version of the environment.</p>
+      </li>
+      <li>
+        <p>Extrapolation: the geometric mean of the success percentages of the following three versions:</p>
+
+        <ul>
+          <li>
+            <p>Train on the deterministic version and evaluate on the random version.</p>
+          </li>
+          <li>
+            <p>Train on the deterministic version and evaluate on the extreme version.</p>
+          </li>
+          <li>
+            <p>Train on the random version and evaluate on the extreme version.</p>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Observations</p>
+
+    <ul>
+      <li>
+        <p>Extrapolation is harder than interpolation.</p>
+      </li>
+      <li>
+        <p>Increasing the diversity in the training environments improves the interpolation generalization of vanilla RL methods.</p>
+      </li>
+      <li>
+        <p>EPOpt improves generalization only for continuous control environments and only with PPO.</p>
+      </li>
+      <li>
+        <p>RL<sup>2</sup> is difficult to train on the environments considered and did not provide a clear advantage in terms of generalization.</p>
+      </li>
+      <li>
+        <p>EPOpt-PPO outperforms PPO on only 3 environments, and EPOpt-A2C does not outperform A2C.</p>
+      </li>
+    </ul>
+  </li>
+</ul>
+
+
+
+
+    Quantifying Generalization in Reinforcement Learning
+
+    2019-07-25T00:00:00-04:00
+    /site/2019/07/25/Quantifying Generalization in Reinforcement Learning
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>The paper introduces a new, procedurally generated environment called CoinRun that is designed to benchmark the generalization capabilities of RL algorithms.</p>
+  </li>
+  <li>
+    <p>The paper reports that deep convolutional architectures and techniques like L2 regularization, batch norm, etc (which were proposed in the context of generalization in supervised learning) are also useful for RL.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1812.02341">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="coinrun-environment">CoinRun Environment</h2>
+
+<ul>
+  <li>
+    <p>CoinRun is made of multiple levels.</p>
+  </li>
+  <li>
+    <p>In each level, the agent spawns on the far left side and needs to collect a single coin that lies on the far right side.</p>
+  </li>
+  <li>
+    <p>There are many obstacles in between and colliding with an obstacle leads to the agent's death.</p>
+  </li>
+  <li>
+    <p>Each episode extends for a maximum of 1000 steps.</p>
+  </li>
+  <li>
+    <p>CoinRun is designed such that, given sufficient training time and levels, a near-optimal policy can be learned for all the levels.</p>
+  </li>
+</ul>
+
+<h2 id="generalization">Generalization</h2>
+
+<ul>
+  <li>
+    <p>Generalization can be measured by training an agent on a given set of training tasks and evaluating on an unseen set of test tasks.</p>
+  </li>
+  <li>
+    <p>9 agents are trained to play CoinRun, on different training sets (each with a different number of levels).</p>
+  </li>
+  <li>
+    <p>The first 8 agents are trained on sets of size 100 to 16000 levels while the last agent is trained on an unbounded set of levels.</p>
+  </li>
+  <li>
+    <p>Training a model on an unbounded set of levels provides a good proxy for the train-to-test generalization performance.</p>
+  </li>
+</ul>
+
+<h2 id="evaluating-architectures">Evaluating Architectures</h2>
+
+<ul>
+  <li>
+    <p>Two convolutional architectures (of different sizes) are compared:</p>
+
+    <ul>
+      <li>
+        <p>Nature-CNN: The CNN architecture used in the <a href="https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf">Deep Q Network</a>. This is the smaller network among the two models.</p>
+      </li>
+      <li>
+        <p>IMPALA-CNN: The CNN architecture used in the <a href="https://arxiv.org/abs/1802.01561">IMPALA architecture</a>.</p>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>The IMPALA-CNN agent always outperforms the Nature-CNN agent, indicating that the larger architecture has more capacity for generalization. But increasing the network size beyond a limit gives diminishing returns.</p>
+  </li>
+</ul>
+
+<h2 id="evaluating-regularization">Evaluating Regularization</h2>
+
+<ul>
+  <li>
+    <p>While both L2 regularization and Dropout help to improve generalization, L2 regularization is more impactful.</p>
+  </li>
+  <li>
+    <p>A domain randomization/data augmentation approach is tested where rectangular regions of different sizes are masked and assigned a random color (a small sketch appears at the end of this summary). This approach seems to improve performance.</p>
+  </li>
+  <li>
+    <p>Batch Normalization helps to improve performance as well.</p>
+  </li>
+  <li>
+    <p>Environment stochasticity is introduced by using sticky actions while policy stochasticity is introduced by controlling the entropy bonus. Both these forms of stochasticity boost performance.</p>
+  </li>
+  <li>
+    <p>While combining different regularization methods helps, the gains are only marginally better than using just 1 regularization approach. This suggests that these different approaches induce similar generalization properties.</p>
+  </li>
+</ul>
+
+<h2 id="additional-environments">Additional Environments</h2>
+
+<ul>
+  <li>
+    <p>Two additional environments are also considered to verify the high degree of overfitting observed in the CoinRun environment:</p>
+
+    <ul>
+      <li>
+        <p>CoinRun-Platforms:</p>
+
+        <ul>
+          <li>
+            <p>Unlike CoinRun, each episode can have multiple coins and the time limit is increased to 1000 steps.</p>
+          </li>
+          <li>
+            <p>Levels are larger as well, so the agent might need to backtrack.</p>
+          </li>
+        </ul>
+      </li>
+      <li>
+        <p>RandomMazes:</p>
+
+        <ul>
+          <li>
+            <p>Partially observed environment with square mazes of dimensions 3x3 to 25x25.</p>
+          </li>
+          <li>
+            <p>Time limit of 500 steps.</p>
+          </li>
+        </ul>
+      </li>
+    </ul>
+  </li>
+  <li>
+    <p>Overfitting is observed for both these environments as well.</p>
+  </li>
+</ul>
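+
+<p>A minimal sketch of the masking augmentation mentioned under regularization above (NumPy; the rectangle-size ranges are illustrative):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def random_cutout_color(obs, rng=np.random):
+    # Mask a random rectangle of the observation with a random RGB color.
+    h, w, _ = obs.shape
+    ch, cw = rng.randint(h // 8, h // 2), rng.randint(w // 8, w // 2)
+    y, x = rng.randint(0, h - ch), rng.randint(0, w - cw)
+    out = obs.copy()
+    out[y:y + ch, x:x + cw] = rng.randint(0, 256, size=3)
+    return out
+</code></pre>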
+
+
+
+
+    Set Transformer - A Framework for Attention-based Permutation-Invariant Neural Networks
+
+    2019-07-18T00:00:00-04:00
+    /site/2019/07/18/Set Transformer A Framework for Attention-based Permutation-Invariant Neural Networks
+    <h2 id="introduction">Introduction</h2>
+
+<ul>
+  <li>
+    <p>Consider problems where the input to the model is a set. In such problems (referred to as set-input problems), the model should be invariant to the permutation of the data points.</p>
+  </li>
+  <li>
+    <p>In “set pooling” methods (<a href="https://arxiv.org/abs/1606.02185">1</a>, <a href="https://arxiv.org/abs/1703.06114">2</a>), each data point (in the input set) is encoded using a feed-forward network and the resulting set of encoded representations is pooled using the “sum” operator.</p>
+  </li>
+  <li>
+    <p>This approach can be shown to be both permutation-invariant and a universal function approximator.</p>
+  </li>
+  <li>
+    <p>The paper proposes an attention-based network module, called the Set Transformer, which can model the interactions between the elements of an input set while being permutation invariant.</p>
+  </li>
+  <li>
+    <p><a href="https://arxiv.org/abs/1810.00825">Link to the paper</a></p>
+  </li>
+</ul>
+
+<h2 id="transformer">Transformer</h2>
+
+<ul>
+  <li>
+    <p>An attention function <em>Attn(Q, K, V) = (QK<sup>T</sup>)V</em> is used to map queries <em>Q</em> to outputs using key-value pairs <em>K, V</em>.</p>
+  </li>
+  <li>
+    <p>In case of multi-head attention, the key, query, and value are projected into <em>h</em> different vectors and attention is applied on all these vectors. The output is a linear transformation of the concatenation of all the vectors.</p>
+  </li>
+</ul>
+
+<h2 id="set-transformer">Set Transformer</h2>
+
+<ul>
+  <li>
+    <p>3 modules are introduced: MAB, SAB and ISAB (see the sketch after this section).</p>
+  </li>
+  <li>
+    <p>Multihead Attention Block (MAB) is a module very similar to the encoder in the Transformer, without the positional encoding and dropout.</p>
+  </li>
+  <li>
+    <p>Set Attention Block (SAB) is a module that takes as input a set and performs self-attention between the elements of the set to produce another set of the same size, ie <em>SAB(X) = MAB(X, X)</em>.</p>
+  </li>
+  <li>
+    <p>The time complexity of the SAB operation is <em>O(n<sup>2</sup>)</em> where <em>n</em> is the number of elements in the set. It can be reduced to <em>O(m*n)</em> by using Induced Set Attention Blocks (ISAB) with <em>m</em> induced point vectors (denoted as I).</p>
+  </li>
+  <li>
+    <p><em>ISAB<sub>m</sub>(X) = MAB(X, MAB(I, X))</em>.</p>
+  </li>
+  <li>
+    <p>ISAB can be seen as performing a low-rank projection of the inputs.</p>
+  </li>
+  <li>
+    <p>These modules can be used to model the interactions between data points in any given set.</p>
+  </li>
+</ul>
+
+<h2 id="pooling-by-multihead-attention-pma">Pooling by Multihead Attention (PMA)</h2>
+
+<ul>
+  <li>
+    <p>Aggregation is performed by applying multi-head attention on a set of <em>k</em> seed vectors.</p>
+  </li>
+  <li>
+    <p>The interaction between the <em>k</em> outputs (from PMA) can be modeled by applying another SAB.</p>
+  </li>
+  <li>
+    <p>Thus the entire network is a stack of SABs and ISABs. Both the modules are permutation invariant and so is any network obtained by stacking them.</p>
+  </li>
+</ul>
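+
+<p>A minimal sketch of MAB and ISAB as described above (PyTorch); the layer norm and feed-forward sublayers of the full Transformer encoder are omitted, and all names are illustrative:</p>
+
+<pre><code class="language-python">import torch
+import torch.nn as nn
+
+class MAB(nn.Module):
+    def __init__(self, dim, heads=4):
+        super().__init__()
+        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+    def forward(self, X, Y):
+        out, _ = self.attn(X, Y, Y)  # queries X, keys/values Y
+        return out
+
+class ISAB(nn.Module):
+    def __init__(self, dim, m, heads=4):
+        super().__init__()
+        self.I = nn.Parameter(torch.randn(1, m, dim))  # m induced points
+        self.mab1, self.mab2 = MAB(dim, heads), MAB(dim, heads)
+
+    def forward(self, X):  # X: (batch, n, dim)
+        H = self.mab1(self.I.expand(X.size(0), -1, -1), X)  # (batch, m, dim)
+        return self.mab2(X, H)  # ISAB_m(X) = MAB(X, MAB(I, X))
+
+X = torch.randn(2, 100, 64)
+print(ISAB(dim=64, m=16)(X).shape)  # torch.Size([2, 100, 64])
+</code></pre>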
+<ul>
+ <li>
+ <p>The paper considers various ablations of the proposed approach (like disabling attention in the encoder or pooling layer) and shows that the attention mechanism is needed during both stages.</p>
+ </li>
+ <li>
+ <p>The work has two main benefits over prior work:</p>
+
+ <ul>
+ <li>
+ <p>Reducing the <em>O(n<sup>2</sup>)</em> complexity to <em>O(m*n)</em> complexity.</p>
+ </li>
+ <li>
+ <p>Using the self-attention mechanism both for encoding the inputs and for aggregating the encoded representations.</p>
+ </li>
+ </ul>
+ </li>
+</ul>
+
+
+
+
+ Measuring abstract reasoning in neural networks
+
+ 2019-06-27T00:00:00-04:00
+ /site/2019/06/27/Measuring Abstract Reasoning in Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper proposes a dataset to diagnose the abstract reasoning capabilities of learning systems.</p>
+ </li>
+ <li>
+ <p>The paper shows that a variant of the relational networks, explicitly designed for abstract reasoning, outperforms models like ResNets.</p>
+ </li>
+ <li>
+ <p><a href="http://proceedings.mlr.press/v80/santoro18a/santoro18a.pdf">Link to the paper</a></p>
+ </li>
+</ul>
+
+<h2 id="idea">Idea</h2>
+
+<ul>
+ <li>
+ <p>Visual reasoning tasks, inspired by human IQ tests, are used to evaluate the models in terms of generalization.</p>
+ </li>
+ <li>
+ <p>Let’s say that we want to test if the model understands the abstract notion of “increasing”. We could train the model on data that captures the notion of “increasing”, in terms of say increasing size (or quantities) of objects, and then test it on a dataset where the notion is expressed in terms of increasing intensity of color.</p>
+ </li>
+ <li>
+ <p>The dataset is then used to evaluate if the models can find any solution to such abstract reasoning tasks and how well they generalize when the abstract content is specifically controlled.</p>
+ </li>
+</ul>
+
+<h2 id="dataset">Dataset</h2>
+
+<h3 id="ravens-progressive-matrics-rpms">Raven’s Progressive Matrices (RPMs)</h3>
+
+<ul>
+ <li>
+ <p>Consists of an incomplete 3x3 matrix of images where the missing image needs to be filled in, typically by choosing from a set of candidate images.</p>
+ </li>
+ <li>
+ <p>As such, it is possible to justify multiple answers to be correct though, in practice, the right answer is the one with the simplest explanation.</p>
+ </li>
+</ul>
+
+<h3 id="procedurally-generated-matrices-pgms">Procedurally Generated Matrices (PGMs)</h3>
+
+<ul>
+ <li>
+ <p>RPM-like matrices are generated procedurally by building an abstract structure for each matrix.</p>
+ </li>
+ <li>
+ <p>The abstract structure <em>S</em> consists of 3 components: (i) Relation types (<em>R</em>), (ii) Object types (<em>O</em>) and (iii) Attribute types (<em>A</em>) ie <em>S = {(r, o, a) | r in R, o in O and a in A}</em>.</p>
+ </li>
+ <li>
+ <p>This can be read as: “Structure <em>S</em> is instantiated on attribute <em>a</em> of object <em>o</em> and exhibits the relation <em>r</em>”. For example, <em>S</em> is instantiated on “color” of object “shape” and exhibits the relation “increasing”.</p>
+ </li>
+ <li>
+ <p>In general, the structure could be made of more than one such tuple, and the more tuples there are, the harder the task.</p>
+ </li>
+ <li>
+ <p>Given the structure, sample values <em>v</em> for each attribute <em>a</em> while conforming with the relation <em>r</em>. 
For example, if the attribute is “color” and the relation is “increasing”, the intensity of color must increase.</p>
+ </li>
+ <li>The resulting structure is rendered as pixels.</li>
+</ul>
+
+<h2 id="test-for-generalization">Test for Generalization</h2>
+
+<ul>
+ <li>
+ <p>The paper tests for the following generalization scenarios:</p>
+ </li>
+ <li>
+ <p>Neutral: The structure of the training and test data can contain any tuple.</p>
+ </li>
+ <li>
+ <p>Interpolation: The training data contains even-indexed members of the attribute values while the test data contains odd-indexed members of the attribute values.</p>
+ </li>
+ <li>
+ <p>Extrapolation: The training data contains the first half of the attribute values while the test data contains the second half of the attribute values.</p>
+ </li>
+ <li>
+ <p>Heldout attribute: Training data contains no tuples with (o = shape, a = color) or (o = line, a = type).</p>
+ </li>
+ <li>
+ <p>Heldout triples: Out of 29 possible triples, 7 are held out from training and only used during testing.</p>
+ </li>
+ <li>
+ <p>Heldout pair-of-triples: Out of 400 possible sets of pair of triples, 40 were held out and used only during testing.</p>
+ </li>
+ <li>
+ <p>Heldout attribute pair: Out of 20 (unordered) variable attribute pairs, 4 were held out and used only during testing.</p>
+ </li>
+</ul>
+
+<h2 id="models">Models</h2>
+
+<ul>
+ <li>
+ <p><strong>Input</strong>: 8 context panels (from the 3x3 matrix) where the last panel needs to be filled.</p>
+ </li>
+ <li>
+ <p>CNN-MLP - 4 layer CNN with batchnorm and ReLU.</p>
+ </li>
+ <li>
+ <p>ResNet - ResNet-50 (as it performed better than ResNet-101 and ResNet-152).</p>
+ </li>
+ <li>
+ <p>LSTM</p>
+ </li>
+ <li>
+ <p>Wild Relation Network (WReN) - A CNN model encodes the 8 panels and the candidate answers and feeds them as input to a relational network (see the sketch after the Results section).</p>
+ </li>
+ <li>
+ <p>Context-blind ResNet - ResNet network without the context (or the 8 input panels).</p>
+ </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+ <li>
+ <p>The WReN model outperforms the other models on the Neutral setup.</p>
+ </li>
+ <li>
+ <p>Models have a harder time differentiating between sizes than between quantities.</p>
+ </li>
+ <li>
+ <p>WReN is the best-performing model in all the setups and the rest of the discussion applies only to that model.</p>
+ </li>
+ <li>
+ <p>Generalization is easiest in the interpolation setting and worst in the extrapolation setting, hinting at the limited generalization capability of the models.</p>
+ </li>
+</ul>
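+<p>As a concrete picture of how a WReN-style model scores one candidate panel, here is a hedged Relation Network-style sketch; the panel encoder, layer sizes, and aggregation details are illustrative rather than the paper's exact architecture:</p>
+
+<pre><code class="language-python">
+import torch
+import torch.nn as nn
+
+class RelationScore(nn.Module):
+    # Scores the 8 context panels plus one candidate, Relation Network style:
+    # apply g() to every ordered pair of panel embeddings, sum, then apply f().
+    def __init__(self, dim=64):
+        super().__init__()
+        self.g = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 128))
+        self.f = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
+
+    def forward(self, panels):  # panels: (batch, 9, dim), already CNN-encoded
+        n = panels.size(1)
+        left = panels.unsqueeze(2).expand(-1, n, n, -1)
+        right = panels.unsqueeze(1).expand(-1, n, n, -1)
+        pairs = torch.cat([left, right], dim=-1)       # all ordered panel pairs
+        return self.f(self.g(pairs).sum(dim=(1, 2)))   # (batch, 1) score
+
+# One forward pass per candidate answer; a softmax over the candidate
+# scores then picks the panel that best completes the matrix.
+scores = RelationScore()(torch.randn(4, 9, 64))
+print(scores.shape)  # torch.Size([4, 1])
+</code></pre>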
+<h2 id="auxiliary-training">Auxiliary Training</h2>
+
+<ul>
+ <li>
+ <p>The model is also trained to predict the relevant relation, object and attribute types using the meta-targets that encode this information.</p>
+ </li>
+ <li>
+ <p>The auxiliary training helps in all the cases. Further, the model’s accuracy on the main task is higher in the cases where it solves the auxiliary tasks well.</p>
+ </li>
+</ul>
+
+<h2 id="key-takeaway">Key Takeaway</h2>
+
+<ul>
+ <li>
+ <p>For abstract visual reasoning tasks, the choice of models can make a large difference, as seen here in the comparison of ResNets vs relational networks.</p>
+ </li>
+ <li>
+ <p>Using an auxiliary loss that encourages the model to “explain” its reasoning (in this case by predicting the attributes, relations, etc) helps to improve the performance on the main task as well.</p>
+ </li>
+ <li>
+ <p>Given that the challenge is motivated by tasks used to measure human IQ, it would have been interesting to get an estimate of human performance on at least a subset of this dataset.</p>
+ </li>
+</ul>
+
+
+
+
+ Hamiltonian Neural Networks
+
+ 2019-06-20T00:00:00-04:00
+ /site/2019/06/20/Hamiltonian Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper proposes a very cool idea at the intersection of deep learning and physics.</p>
+ </li>
+ <li>
+ <p>The idea is to train a neural network architecture that builds on the concept of Hamiltonian Mechanics (from Physics) to learn physical conservation laws in an unsupervised manner.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1906.01563">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/greydanus/hamiltonian-nn">Link to the code</a></p>
+ </li>
+ <li>
+ <p><a href="https://greydanus.github.io/2019/05/15/hamiltonian-nns/">Link to author’s blog</a></p>
+ </li>
+</ul>
+
+<h2 id="hamiltonian-mechanics">Hamiltonian Mechanics</h2>
+
+<ul>
+ <li>
+ <p>It is a branch of physics that can describe systems which follow some conservation laws and invariants.</p>
+ </li>
+ <li>
+ <p>Consider a set of <em>N</em> pairs of coordinates [(q<sub>1</sub>, p<sub>1</sub>), …, (q<sub>N</sub>, p<sub>N</sub>)] where <strong>q</strong> = [q<sub>1</sub>, …, q<sub>N</sub>] denotes the positions of the set of objects while <strong>p</strong> = [p<sub>1</sub>, …, p<sub>N</sub>] denotes their momenta.</p>
+ </li>
+ <li>
+ <p>Together these <em>N</em> pairs completely describe the system.</p>
+ </li>
+ <li>
+ <p>A scalar function <em>H(<strong>q</strong>, <strong>p</strong>)</em>, called the Hamiltonian, is defined such that the partial derivative of <em>H</em> with respect to <strong>p</strong> is equal to the derivative of <strong>q</strong> with respect to time <em>t</em>, and the negative of the partial derivative of <em>H</em> with respect to <strong>q</strong> is equal to the derivative of <strong>p</strong> with respect to time <em>t</em>.</p>
+ </li>
+ <li>
+ <p>This can be expressed in the form of the equation as follows:</p>
+ </li>
+</ul>
+
+<p><img src="https://raw.githubusercontent.com/shagunsodhani/papers-I-read/master/assets/HNN/equation1.png" alt="equation1" width="100" height="100" /></p>
+
+<ul>
+ <li>The Hamiltonian can be tied to the total energy of the system and can be used in any system where the total energy is conserved.</li>
+</ul>
+
+<h2 id="hamiltonian-neural-network-hnn">Hamiltonian Neural Network (HNN)</h2>
+
+<ul>
+ <li>
+ <p>The Hamiltonian <em>H</em> can be parameterized using a neural network and can learn conserved quantities from the data in an unsupervised manner.</p>
+ </li>
+ <li>
+ <p>The loss function looks as follows:</p>
+ </li>
+</ul>
+
+<p><img src="https://raw.githubusercontent.com/shagunsodhani/papers-I-read/master/assets/HNN/equation2.png" alt="equation2" width="400" height="50" /></p>
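+<p>A minimal sketch of this training step (a simplification of the idea above; the two-dimensional phase space and network sizes are illustrative):</p>
+
+<pre><code class="language-python">
+import torch
+import torch.nn as nn
+
+# The network outputs a scalar H(q, p); its gradients give the predicted dynamics.
+hnn = nn.Sequential(nn.Linear(2, 200), nn.Tanh(), nn.Linear(200, 1))
+
+def hnn_loss(qp, dqp_dt):
+    # qp: (batch, 2) positions and momenta; dqp_dt: (batch, 2) observed derivatives.
+    qp = qp.requires_grad_(True)
+    H = hnn(qp).sum()
+    dH = torch.autograd.grad(H, qp, create_graph=True)[0]   # in-graph gradient
+    dq_pred, dp_pred = dH[:, 1], -dH[:, 0]                  # dq/dt = dH/dp, dp/dt = -dH/dq
+    return ((dq_pred - dqp_dt[:, 0]) ** 2 + (dp_pred - dqp_dt[:, 1]) ** 2).mean()
+
+qp = torch.randn(32, 2)
+loss = hnn_loss(qp, torch.randn(32, 2))
+loss.backward()  # trains the Hamiltonian end-to-end
+</code></pre>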
+<ul>
+ <li>The partial derivatives can be obtained by computing the <em>in-graph</em> gradient of the output variables with respect to the input variables.</li>
+</ul>
+
+<h2 id="observations">Observations</h2>
+
+<ul>
+ <li>
+ <p>For setups where the energy must be conserved exactly (eg ideal mass-spring and ideal pendulum), HNNs learn to preserve an energy-like scalar.</p>
+ </li>
+ <li>
+ <p>For setups where the energy need not be conserved exactly, the HNNs still learn to preserve the energy, thus highlighting a limitation of HNNs.</p>
+ </li>
+ <li>
+ <p>In the case of the two-body problem, the HNN model is shown to be much more robust when making predictions over longer time horizons as compared to the baselines.</p>
+ </li>
+ <li>
+ <p>In the final experiment, the model is trained on pixel observations and not state observations. In this case, two auxiliary losses are added: an auto-encoder reconstruction loss and a loss on the latent space representations. Similar to the previous experiments, the HNN model makes robust predictions over much longer time horizons.</p>
+ </li>
+</ul>
+
+
+
+
+ Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
+
+ 2019-06-13T00:00:00-04:00
+ /site/2019/06/13/Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper proposes a new inverse RL (IRL) algorithm, called Trajectory-ranked Reward EXtrapolation (T-REX), that learns a reward function from a collection of ranked trajectories.</p>
+ </li>
+ <li>
+ <p>Standard IRL approaches aim to learn a reward function that “justifies” the demonstration policy and hence those approaches cannot outperform the demonstration policy.</p>
+ </li>
+ <li>
+ <p>In contrast, T-REX aims to learn a reward function that “explains” the ranking over demonstrations and can learn a policy that outperforms the demonstration policy.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1904.06387">Link to the paper</a></p>
+ </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+ <li>
+ <p>The input is a sequence of trajectories <em>T<sub>1</sub>, … T<sub>m</sub></em> which are ranked in the order of preference. That is, given any pair of trajectories, we know which of the two trajectories is better.</p>
+ </li>
+ <li>
+ <p>The setup is to learn from observations where the learning agent does not have access to the true reward function or the actions taken by the demonstration policy.</p>
+ </li>
+ <li>
+ <p>Reward Inference</p>
+
+ <ul>
+ <li>
+ <p>A parameterized reward function <em>r<sub>θ</sub></em> is trained with the ranking information using a binary classification loss function which aims to predict which of the two given trajectories would be ranked higher.</p>
+ </li>
+ <li>
+ <p>Given a trajectory, the reward function predicts the reward for each state. The sum of rewards (corresponding to the two trajectories) is used to predict the preferred trajectory, as sketched below.</p>
+ </li>
+ <li>
+ <p>T-REX uses partial trajectories instead of full trajectories as a data augmentation strategy.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Policy Optimization</p>
+
+ <ul>
+ <li>Once a reward function has been learned, standard RL approaches can be used to train a new policy.</li>
+ </ul>
+ </li>
+</ul>
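+<p>A small sketch of the ranking loss described above (the state dimensionality and network size are arbitrary choices):</p>
+
+<pre><code class="language-python">
+import torch
+import torch.nn as nn
+
+reward_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))  # per-state reward
+
+def trex_loss(traj_i, traj_j, i_preferred):
+    # traj_*: (T, state_dim) tensors; i_preferred: True if traj_i is ranked higher.
+    # Summed predicted rewards act as logits of a binary classifier over the pair.
+    returns = torch.stack([reward_net(traj_i).sum(), reward_net(traj_j).sum()])
+    target = torch.tensor(0 if i_preferred else 1)
+    return nn.functional.cross_entropy(returns.unsqueeze(0), target.unsqueeze(0))
+
+loss = trex_loss(torch.randn(100, 4), torch.randn(80, 4), i_preferred=True)
+loss.backward()
+</code></pre>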
+<h2 id="results">Results</h2>
+
+<ul>
+ <li>
+ <p>Environments: Mujoco (Half Cheetah, Ant, Hopper), Atari</p>
+ </li>
+ <li>
+ <p>Demonstrations generated using PPO (checkpointed at different stages of training).</p>
+ </li>
+ <li>
+ <p>An ensemble of networks is used to learn the reward functions.</p>
+ </li>
+ <li>
+ <p>The proposed approach outperforms the baselines <a href="https://arxiv.org/abs/1805.01954">Behaviour Cloning from Observations</a> and <a href="https://arxiv.org/abs/1606.03476">Generative Adversarial Imitation Learning</a>.</p>
+ </li>
+ <li>
+ <p>In terms of reward extrapolation, T-REX can predict the reward for trajectories which are better than the demonstration trajectories.</p>
+ </li>
+ <li>
+ <p>Some ablation studies considered the effect of adding noise (randomly swapping the preference between trajectories) and found that the model is somewhat robust to noise, up to an extent.</p>
+ </li>
+</ul>
+
+
+
+
+ Meta-Reinforcement Learning of Structured Exploration Strategies
+
+ 2019-06-08T00:00:00-04:00
+ /site/2019/06/08/Meta-Reinforcement Learning of Structured Exploration Strategies
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper looks at the problem of learning structured exploration policies for training RL agents.</p>
+ </li>
+ <li>
+ <p>Link to the <a href="https://arxiv.org/abs/1802.07245">paper</a></p>
+ </li>
+</ul>
+
+<h2 id="structured-exploration">Structured Exploration</h2>
+
+<ul>
+ <li>
+ <p>Consider a stochastic, parameterized policy π<sub>θ</sub>(a|s) where θ represents the <em>policy-parameters</em>.</p>
+ </li>
+ <li>
+ <p>To encourage exploration, noise can be added to the policy at each time step t. But the noise added in such a manner does not have any notion of temporal coherence.</p>
+ </li>
+ <li>
+ <p>Another issue is that if the policy is represented by a simple distribution (say a parameterized unimodal Gaussian), it cannot model complex time-correlated stochastic processes.</p>
+ </li>
+ <li>
+ <p>The paper proposes to condition the policy on per-episode random variables (z) which are sampled from a learned latent distribution, as sketched below.</p>
+ </li>
+</ul>
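+<p>A sketch of the per-episode latent variable mechanism (all sizes are illustrative; the meta-learning of these parameters, described next, is omitted here):</p>
+
+<pre><code class="language-python">
+import torch
+import torch.nn as nn
+
+# Each task i owns variational parameters (mu_i, log_sigma_i); one z is drawn
+# per episode and held fixed, so the exploration noise is temporally coherent.
+num_tasks, latent_dim, obs_dim, act_dim = 100, 8, 16, 4
+mu = nn.Parameter(torch.zeros(num_tasks, latent_dim))
+log_sigma = nn.Parameter(torch.zeros(num_tasks, latent_dim))
+policy = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
+
+def start_episode(task_id):
+    # Reparameterized sample: keeps mu/log_sigma trainable through the rollout.
+    eps = torch.randn(latent_dim)
+    return mu[task_id] + eps * log_sigma[task_id].exp()
+
+z = start_episode(task_id=3)               # sampled once per episode
+obs = torch.randn(obs_dim)
+action_mean = policy(torch.cat([obs, z]))  # the same z is used at every step
+</code></pre>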
+<ul>
+ <li>
+ <p>Consider a distribution over the tasks p(T). At the start of any episode of the i<sup>th</sup> task, a latent variable z<sub>i</sub> is sampled from the distribution <em>N(μ<sub>i</sub>, σ<sub>i</sub>)</em> where μ<sub>i</sub> and σ<sub>i</sub> are the learned parameters of the distribution and are referred to as the <em>variational parameters</em>.</p>
+ </li>
+ <li>
+ <p>Once sampled, the same <em>z<sub>i</sub></em> is used to condition the policy for as long as the current episode lasts and the action is sampled from the distribution π<sub>θ</sub>(a|s, z<sub>i</sub>).</p>
+ </li>
+ <li>
+ <p>The intuition is that the latent variable z<sub>i</sub> would encode the notion of a task or goal that does not change arbitrarily during the episode.</p>
+ </li>
+</ul>
+
+<h2 id="model-agnostic-exploration-with-structured-noise">Model Agnostic Exploration with Structured Noise</h2>
+
+<ul>
+ <li>
+ <p>The paper focuses on the setting where the structured exploration policies are to be learned while leveraging the learning from prior tasks.</p>
+ </li>
+ <li>
+ <p>A meta-learning approach, called model agnostic exploration with structured noise (MAESN), is proposed to learn a good initialization of the <em>policy-parameters</em> and to learn a latent space (to sample the z from) that can inject structured stochasticity into the policy.</p>
+ </li>
+ <li>
+ <p>General meta-RL approaches have two limitations when it comes to “learning to explore”:</p>
+
+ <ul>
+ <li>Casting meta-RL problems as RL problems leads to policies that do not exhibit sufficient variability to explore effectively.</li>
+ <li>Many current approaches try to meta-learn the entire learning algorithm which limits the asymptotic performance of the model.</li>
+ </ul>
+ </li>
+ <li>
+ <p>The idea behind MAESN is to meta-train the <em>policy-parameters</em> so that they learn to use the task-specific <em>latent variables</em> for exploration and can quickly adapt to a new task.</p>
+ </li>
+ <li>
+ <p>An important detail is that the parameters are optimized to maximize the expected rewards after one step of gradient update to ensure that the policy uses the latent variables for exploration.</p>
+ </li>
+ <li>
+ <p>For every iteration of meta-training, an “inner” gradient update is performed on the variational parameters and the <em>post-inner-update</em> parameters are used to perform the meta-update.</p>
+ </li>
+ <li>
+ <p>The authors report that performing the “inner” gradient update on the <em>policy-parameters</em> does not help the overall learning objective and that the step size for each parameter had to be meta-learned.</p>
+ </li>
+ <li>
+ <p>The variational parameters have the usual KL divergence loss which encourages them to be close to the prior distribution (unit Gaussian in this case).</p>
+ </li>
+ <li>
+ <p>After training, the <em>variational parameters</em> for each task are quite close to the prior, probably because the training objective optimizes for the expected reward after one step of gradient descent on the <em>variational parameters</em>.</p>
+ </li>
+ <li>
+ <p>Another implementation detail is that reward shaping is used to ensure that the policy gets a useful signal during meta-training. To be fair to the baselines, reward shaping is used while training the baselines as well. Moreover, the policies trained with reward shaping generalize to the sparse reward setup as well (during meta-test time).</p>
+ </li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+ <li>
+ <p>Three task distributions: Robotic Manipulation, Wheeled Locomotion, and Legged Locomotion. 
Each task distribution has 100 meta-training tasks.</p>
+ </li>
+ <li>
+ <p>In the Manipulation task distribution, the learner has to push different blocks from different positions to different goal positions. In the Locomotion task distributions, the different tasks correspond to the different goal positions.</p>
+ </li>
+ <li>
+ <p>The experiments show that the proposed approach can adapt to new tasks quickly and learns coherent exploration strategies.</p>
+ </li>
+ <li>
+ <p>In some cases, learning from scratch also provides strong asymptotic performance, although learning from scratch takes much longer.</p>
+ </li>
+</ul>
+
+
+
+
+ Relational Reinforcement Learning
+
+ 2019-06-01T00:00:00-04:00
+ /site/2019/06/01/Relational Reinforcement Learning
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The Relational Reinforcement Learning (RRL) paradigm uses relational state (and action) spaces and policy representations to leverage the generalization capability of relational learning for reinforcement learning.</p>
+ </li>
+ <li>
+ <p>The paper shows the effectiveness of RRL - in terms of generalization, sample efficiency, and interpretability - using Box-World and StarCraft II minigames.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1806.01830">Link to the paper</a>.</p>
+ </li>
+</ul>
+
+<h2 id="architecture">Architecture</h2>
+
+<ul>
+ <li>
+ <p>The main idea is to use neural network models that operate on structured representations and perform relational reasoning via iterated, message-passing style methods.</p>
+ </li>
+ <li>
+ <p>Use of non-local computations using a shared function (in terms of pairwise interactions between entities) provides a better inductive bias.</p>
+ </li>
+ <li>
+ <p>Multi-head dot product attention mechanism is used to model the pairwise interactions (with one or more attention blocks).</p>
+ </li>
+ <li>
+ <p>Iterative computations can be used to capture higher-order interactions between entities.</p>
+ </li>
+ <li>
+ <p>Entity extraction is based on the assumption that entities are things located at a particular point in space.</p>
+ </li>
+ <li>
+ <p>A CNN is used to parse the pixel space observation into <em>k</em> feature maps of size <em>nxn</em>. The <em>(x, y)</em> coordinates are concatenated to each <em>k-</em>dimensional pixel feature-vector to indicate the pixel’s position in the map (see the sketch after the Box-World description below).</p>
+ </li>
+ <li>
+ <p>The resulting <em>n<sup>2</sup> x (k+2)</em> matrix acts as the entity matrix.</p>
+ </li>
+ <li>
+ <p>An actor-critic architecture (using the distributed agent IMPALA) is used.</p>
+ </li>
+</ul>
+
+<h2 id="environment">Environment</h2>
+
+<h3 id="box-world">Box-World</h3>
+
+<ul>
+ <li>
+ <p>12 x 12-pixel room with keys and boxes placed randomly.</p>
+ </li>
+ <li>
+ <p>The agent can move in 4 directions.</p>
+ </li>
+ <li>
+ <p>The task is to collect gems by unlocking boxes (which may contain keys to unlock other boxes).</p>
+ </li>
+ <li>
+ <p>Each level has a unique sequence in which boxes need to be opened, as opening the wrong box could make the level unsolvable.</p>
+ </li>
+ <li>
+ <p>The difficulty of a level can be controlled using: (i) the number of boxes in the path to the goal, (ii) the number of distractor branches, and (iii) the length of the distractor branches.</p>
+ </li>
+</ul>
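+<p>Returning to the entity-extraction step described in the Architecture section, here is a small sketch (the normalization of coordinates to [-1, 1] is an assumption on my part):</p>
+
+<pre><code class="language-python">
+import torch
+
+def to_entities(feature_maps):
+    # feature_maps: (batch, k, n, n) CNN output. Appends (x, y) coordinate
+    # channels and flattens to an entity matrix, as described above.
+    b, k, n, _ = feature_maps.shape
+    ys, xs = torch.meshgrid(torch.linspace(-1, 1, n), torch.linspace(-1, 1, n), indexing="ij")
+    coords = torch.stack([xs, ys]).expand(b, -1, -1, -1)  # (batch, 2, n, n)
+    ents = torch.cat([feature_maps, coords], dim=1)       # (batch, k+2, n, n)
+    return ents.flatten(2).transpose(1, 2)                # (batch, n*n, k+2)
+
+entities = to_entities(torch.randn(8, 32, 12, 12))
+print(entities.shape)  # torch.Size([8, 144, 34]) - ready for multi-head attention
+</code></pre>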
+<h3 id="starcraft-ii-minigames">StarCraft II minigames</h3>
+
+<ul>
+ <li>9 minigames designed as specific scenarios in the StarCraft II game are used.</li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<h3 id="box-world-1">Box-World</h3>
+
+<ul>
+ <li>
+ <p>RRL agents solve over 98% of the levels while the RL agent solves less than 95% of the levels.</p>
+ </li>
+ <li>
+ <p>Visualizing the attention scores indicates that:</p>
+
+ <ul>
+ <li>
+ <p>keys attend to locks they can unlock.</p>
+ </li>
+ <li>
+ <p>all objects attend to the agent’s location.</p>
+ </li>
+ <li>
+ <p>agent and gem attend to each other (and themselves).</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Generalization capacity is tested in two ways:</p>
+
+ <ul>
+ <li>
+ <p>Performance on levels that require opening a longer sequence of boxes than seen during training.</p>
+ </li>
+ <li>
+ <p>Performance on levels that require key-lock combinations not seen during training.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>In both scenarios, the RRL agent significantly outperforms the RL agent.</p>
+ </li>
+</ul>
+
+<h2 id="starcraft">StarCraft</h2>
+
+<ul>
+ <li>
+ <p>The RRL agent achieves better or equal results than the RL agent in all but one game.</p>
+ </li>
+ <li>
+ <p>For testing generalization, the agent that was trained to control two marines was transferred to a task which requires it to control 5 marines. These results are not conclusive given the high variability.</p>
+ </li>
+</ul>
+
+
+
+
+ Good-Enough Compositional Data Augmentation
+
+ 2019-05-21T00:00:00-04:00
+ /site/2019/05/21/Good-Enough Compositional Data Augmentation
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper introduces a simple data augmentation protocol that provides a good compositional inductive bias for sequential models.</p>
+ </li>
+ <li>
+ <p>Synthetic examples are created by taking real sequences and swapping fragments that appear in similar environments. This operation is referred to as GECA (Good Enough Compositional Augmentation).</p>
+ </li>
+ <li>
+ <p>The underlying idea is that if two fragments of training examples occur in some environment, then any environment where the first fragment appears is also a valid environment for the second fragment.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1904.09545">Link to the paper</a></p>
+ </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+ <li>
+ <p>Discover substitutable fragments (ie pairs of fragments that co-occur with a common fragment) and use them to generate new sequences by swapping fragments, as in the toy sketch after this list.</p>
+ </li>
+ <li>
+ <p>The current work uses very simple criteria to decide if fragments are substitutable - fragments should occur in at least one lexical environment that is exactly the same. A lexical environment is the k-word window around each span of the fragment.</p>
+ </li>
+ <li>
+ <p>Though the idea can be motivated by work in generative syntax and distributional semantics, it would not hold like a physical law when applied to real data.</p>
+ </li>
+ <li>
+ <p>The authors view this tradeoff as a balance between the shortage of training data and the relative frequency of mistakes in the proposed data augmentation approach.</p>
+ </li>
+</ul>
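+<p>A toy sketch of the substitution idea, assuming single-word fragments and a one-word window (the paper's fragments and environments are more general than this):</p>
+
+<pre><code class="language-python">
+from collections import defaultdict
+
+def geca(sentences, window=1):
+    # Toy sketch: single-word "fragments" keyed by their lexical environment
+    # (the words around them). Words sharing an environment are substitutable.
+    env2words = defaultdict(set)
+    for sent in sentences:
+        toks = sent.split()
+        for i, tok in enumerate(toks):
+            env = (tuple(toks[max(0, i - window):i]), tuple(toks[i + 1:i + 1 + window]))
+            env2words[env].add(tok)
+    # Swap substitutable words into each environment to synthesize new examples.
+    new = set()
+    for sent in sentences:
+        toks = sent.split()
+        for i, tok in enumerate(toks):
+            env = (tuple(toks[max(0, i - window):i]), tuple(toks[i + 1:i + 1 + window]))
+            for alt in env2words[env] - {tok}:
+                new.add(" ".join(toks[:i] + [alt] + toks[i + 1:]))
+    return new - set(sentences)
+
+# "picks" and "puts" share the environment ("she", "the"), so new sentences
+# like "she puts the wug up" are synthesized.
+print(geca(["she picks the wug up", "she puts the wug down", "pat picks cats up"]))
+</code></pre>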
+<h2 id="results">Results</h2>
+
+<ul>
+ <li>
+ <p>The approach is evaluated on the SCAN dataset, where the model is trained on short sequences of English commands. Though the data augmentation helps the baseline models, this is not surprising given the nature of the SCAN dataset.</p>
+ </li>
+ <li>
+ <p>More challenging tasks (for evaluating the proposed approach) are semantic parsing (where the query is represented in the form of λ calculus or SQL) and low resource language modeling. While the improvement (in terms of metrics) is sometimes limited, the gains are consistent across different datasets.</p>
+ </li>
+ <li>
+ <p>Given that the proposed approach is relatively simple and straightforward, it appears to be quite promising.</p>
+ </li>
+</ul>
+
+
+
+
+ Multiple Model-Based Reinforcement Learning
+
+ 2019-05-14T00:00:00-04:00
+ /site/2019/05/14/Multiple Model-Based Reinforcement Learning
+ <ul>
+ <li>
+ <p>The paper presents some general ideas and mechanisms for multiple model-based RL. Even though the task and model architecture may not be very relevant now, I find the general idea and the mechanisms to be quite useful. As such, I am focusing only on high-level ideas and not the implementation details themselves.</p>
+ </li>
+ <li>
+ <p>The main idea behind Multiple Model-based RL (MMRL) is to decompose complex tasks into multiple domains in space and time so that the environment dynamics within each domain is predictable.</p>
+ </li>
+ <li>
+ <p><a href="https://www.mitpressjournals.org/doi/abs/10.1162/089976602753712972">Link to the paper</a></p>
+ </li>
+ <li>
+ <p>MMRL proposes an RL architecture composed of multiple modules, each with its own state prediction model and RL controller.</p>
+ </li>
+ <li>
+ <p>The prediction error from each state prediction model defines the “responsibility signal” for each module (see the sketch after this list).</p>
+ </li>
+ <li>
+ <p>This responsibility signal is used to:</p>
+
+ <ul>
+ <li>
+ <p>Weigh the state prediction output ie the predicted state is the weighted sum of individual state predictions (weighted by the responsibility signal).</p>
+ </li>
+ <li>
+ <p>Weigh the parameter updates of the environment models as well as the RL controllers.</p>
+ </li>
+ <li>
+ <p>Weigh the action output ie the predicted action is a weighted sum of individual actions.</p>
+ </li>
+ </ul>
+ </li>
+</ul>
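+<p>The responsibility signal can be sketched as a softmax over each module's prediction accuracy under a Gaussian noise model (a simplification of the mechanism above; the noise scale σ is a design parameter):</p>
+
+<pre><code class="language-python">
+import torch
+
+def responsibilities(pred_next_states, true_next_state, sigma=1.0):
+    # Modules that predict the next state well get high responsibility.
+    errors = ((pred_next_states - true_next_state) ** 2).sum(dim=-1)
+    return torch.softmax(-errors / (2 * sigma ** 2), dim=0)
+
+preds = torch.randn(3, 4)  # 3 modules, 4-dim state predictions
+lam = responsibilities(preds, torch.randn(4))
+blended = (lam.unsqueeze(-1) * preds).sum(dim=0)  # responsibility-weighted prediction
+# The same lam also scales each module's parameter update and its action output.
+</code></pre>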
+<ul>
+ <li>
+ <p>The framework is amenable to incorporating prior knowledge about which module should be selected.</p>
+ </li>
+ <li>
+ <p>In the modular decomposition of a task, the modules should not change too frequently and some kind of spatial and temporal continuity is also desired.</p>
+ </li>
+ <li>
+ <p>Temporal continuity can be accounted for by using the previous responsibility signal as input during the current timestep.</p>
+ </li>
+ <li>
+ <p>Spatial continuity can be ensured by considering a spatial prior like the Gaussian spatial prior.</p>
+ </li>
+ <li>
+ <p>Though model-free methods could be used for learning the RL controllers, model-based methods could be more relevant given that the modules are learning state-prediction models as well.</p>
+ </li>
+ <li>
+ <p>Exploration can be ensured by using a stochastic version of greedy action selection.</p>
+ </li>
+ <li>
+ <p>One failure mode for such modular architectures is when a single module tries to perform well across all the tasks. The modules themselves should be relatively simplistic (eg linear models) which can learn quickly and generalize well.</p>
+ </li>
+ <li>
+ <p>A non-stationary hunting task in a grid world and a non-linear, non-stationary control task of swinging up a pendulum provide the proof of concept for the proposed methods.</p>
+ </li>
+</ul>
+
+
+
+
+ Towards a natural benchmark for continual learning
+
+ 2019-04-09T00:00:00-04:00
+ /site/2019/04/09/Towards a natural benchmark for continual learning
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The Continual Learning paradigm focuses on learning from a non-stationary stream of data with additional desiderata - transferring knowledge from previously seen tasks to unseen tasks and being resilient to catastrophic forgetting - all with a fixed memory and computational budget.</p>
+ </li>
+ <li>
+ <p>This is in contrast to the IID (independent and identically distributed) assumption in statistical learning.</p>
+ </li>
+ <li>
+ <p>One common example of non-iid data is setups involving sequential decision making - eg reinforcement learning.</p>
+ </li>
+ <li>
+ <p><a href="https://marcpickett.com/cl2018/CL-2018_paper_48.pdf">Paper</a></p>
+ </li>
+</ul>
+
+<h2 id="benchmark">Benchmark</h2>
+
+<ul>
+ <li>
+ <p>Many existing benchmarks use MNIST as the underlying dataset (eg Permuted MNIST, Split MNIST, etc). These benchmarks lack complexity and make it hard to observe positive and negative backward transfer.</p>
+ </li>
+ <li>
+ <p>Most works focus only on the catastrophic forgetting challenge and ignore the other issues (like computation and memory footprint, the capacity of the network, etc).</p>
+ </li>
+ <li>
+ <p>The paper proposes a new benchmark based on the Starcraft II video game to understand the different approaches for lifelong learning.</p>
+ </li>
+ <li>
+ <p>The sequence of tasks is designed to be a curriculum - the learning agent starts by learning simple skills and later moves to more complex tasks. These complex tasks require remembering and composing skills learned in the earlier levels.</p>
+ </li>
+ <li>
+ <p>To evaluate for catastrophic forgetting, the tasks are designed such that not all the skills are needed for solving each task. Hence the learning agent needs to remember skills even though they are not needed at the current level.</p>
+ </li>
+ <li>
+ <p>Each level comes with a fixed computational budget of episodes and each episode has a fixed time limit. Once the budget is consumed the agent has to proceed to the next level. Hence agents with better sample efficiency would benefit.</p>
+ </li>
+ <li>
+ <p>The benchmark supports both RL and supervised learning versions. In the supervised version, expert agents (pretrained on each level) are also provided.</p>
+ </li>
+ <li>
+ <p>Baselines are provided for distillation (using experts): sequential training (fine tuning), Dropout and SER. 
None of the baseline methods achieve positive or negative backward transfer.</p> + </li> + <li> + <p>When modeled as a pure RL task, the benchmark is extremely difficult to solve.</p> + </li> + <li> + <p>The paper suggests using a metric to record the amount of learning/data required to recover performance on the previous task.</p> + </li> +</ul> + + + + + Meta-Learning Update Rules for Unsupervised Representation Learning + + 2019-04-02T00:00:00-04:00 + /site/2019/04/02/Meta-Learning Update Rules for Unsupervised Representation Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Standard unsupervised learning aims to learn transferable features. The paper proposes to learn a transferable learning rule (in an unsupervised manner) that can generalize across tasks and architectures.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1804.00222">Paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>Consider training the model with supervised learning - <em>φ<sub>t+1</sub> = SupervisedUpdate(φ<sub>t</sub>, x<sub>t</sub>, y<sub>t</sub>, θ)</em>.</p> + </li> + <li> + <p>Here <em>t</em> denotes the step, <em>(x, y)</em> denotes the data points, <em>θ</em> denotes the hyperparameters of the optimizer.</p> + </li> + <li> + <p>Extending this formulation for meta-learning, one could say that <em>t</em> is the step of the inner loop, <em>θ</em> are the parameters of the meta learning model.</p> + </li> + <li> + <p>Further, the paper proposes to use <em>φ<sub>t+1</sub> = UnsupervisedUpdate(φ<sub>t</sub>, x<sub>t</sub>, θ)</em> ie <em>y<sub>t</sub></em> is not used (or even assumed to be available as this is unsupervised learning).</p> + </li> + <li> + <p>The meta update rule is used to learn the weights of a meta-model by performing SGD on the sum of <em>MetaObjective</em> over the distribution of tasks (over the course of inner loop training).</p> + </li> +</ul> + +<h2 id="model">Model</h2> + +<ul> + <li> + <p>Base model: MLP with parameters <em>φ<sub>t</sub></em></p> + </li> + <li> + <p>To ensure that it generalizes across architectures, the update rule is designed to be neural-local ie updates are a function of pre and postsynaptic neurons though, in practice, this constraint is relaxed to decorrelate neurons by using cross neural information.</p> + </li> + <li> + <p>Each neuron <em>i</em> in every layer <em>l</em> (in the base model) has an update network (MLP) which takes as input the feedforward activations, feedback weights and error signals. 
ie <em>h<sub>b</sub><sup>l</sup>(i) = MLP(x<sub>b</sub><sup>l</sup>(i), z<sub>b</sub><sup>l</sup>(i), v<sup>l+1</sup>, δ<sup>l</sup>(i), θ)</em></p>
+
+ <ul>
+ <li><em>b</em> - index of the minibatch</li>
+ <li><em>x<sup>l</sup></em> - pre non-linearity activations</li>
+ <li><em>z<sup>l</sup></em> - post non-linearity activations</li>
+ <li><em>v<sup>l</sup></em> - feedback weights</li>
+ <li><em>δ<sup>l</sup></em> - error signal</li>
+ </ul>
+ </li>
+ <li>
+ <p>All the update networks share the meta parameters <em>θ</em>.</p>
+ </li>
+ <li>
+ <p>The model is run in a standard feed-forward manner and the update network (corresponding to each unit) is used to generate the error signal <em>δ<sup>l</sup><sub>b</sub>(i) = lin(h<sub>b</sub><sup>l</sup>(i))</em>.</p>
+ </li>
+ <li>
+ <p>This error signal is backpropagated using the set of learned backward weights <em>v<sup>l</sup></em> instead of the forward weights <em>w<sub>l</sub></em>.</p>
+ </li>
+ <li>
+ <p>The weight update <em>Δw<sub>l</sub></em> is also generated using a per-neuron update network.</p>
+ </li>
+</ul>
+
+<h2 id="meta-objective">Meta Objective</h2>
+
+<ul>
+ <li>
+ <p>The <em>MetaObjective</em> is based on fitting a linear regression model to labeled examples with a small number of data points.</p>
+ </li>
+ <li>
+ <p>Given the emphasis on learning generalizable features, the weights (of linear regression) are estimated on one batch and evaluated on another batch.</p>
+ </li>
+ <li>
+ <p>The <em>MetaObjective</em> is to reduce the cosine distance between <em>y<sub>b</sub></em> and <em>v<sup>T</sup>x<sub>b</sub><sup>L</sup></em></p>
+
+ <ul>
+ <li>
+ <p><em>y<sub>b</sub></em> - Actual labels on the evaluation batch</p>
+ </li>
+ <li>
+ <p><em>x<sub>b</sub><sup>L</sup></em> - Features of the evaluation batch (using the base model)</p>
+ </li>
+ <li>
+ <p><em>v</em> - parameters of the linear regression model (learned on the train batch)</p>
+ </li>
+ </ul>
+ </li>
+</ul>
+
+<h2 id="practical-considerations">Practical Considerations</h2>
+
+<ul>
+ <li>
+ <p>Meta gradients are approximated using truncated backprop through time.</p>
+ </li>
+ <li>
+ <p>Increasing variation in the training dataset helps the meta optimization process. Data is augmented with shifts, rotations, and noise. Predicting these coefficients is an auxiliary (regression) task for training the meta-objective.</p>
+ </li>
+ <li>
+ <p>Training the system requires a lot of resources - 8 days with 512 workers.</p>
+ </li>
+</ul>
+
+<h2 id="results">Results</h2>
+
+<ul>
+ <li>
+ <p>With standard unsupervised learning, the performance (on the transfer task) starts declining after some time even though the performance (on the unsupervised task) is improving. 
This suggests that the objective functions for the two tasks start to mismatch.</p>
+ </li>
+ <li>
+ <p><em>UnsupervisedUpdate</em> leads to better generalization as compared to both VAE and supervised learning (followed by transfer).</p>
+ </li>
+ <li>
+ <p><em>UnsupervisedUpdate</em> also leads to a positive transfer across domains (vision to language) when trained for a shorter duration of time (to ensure that the meta-objective does not overfit).</p>
+ </li>
+ <li>
+ <p><em>UnsupervisedUpdate</em> also generalizes to larger model architectures and different activation functions.</p>
+ </li>
+</ul>
+
+
+
+
+ GNN Explainer - A Tool for Post-hoc Explanation of Graph Neural Networks
+
+ 2019-03-26T00:00:00-04:00
+ /site/2019/03/26/GNN Explainer - A Tool for Post-hoc Explanation of Graph Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>Graph Neural Networks (GNNs) are a family of powerful machine learning (ML) models for graphs that can combine node information with structural information.</p>
+ </li>
+ <li>
+ <p>One downside of GNNs is that their predictions are hard to interpret.</p>
+ </li>
+ <li>
+ <p>The paper proposes the GNN Explainer model for solving the problem of interpretability.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1903.03894">Paper</a></p>
+ </li>
+</ul>
+
+<h2 id="desiderata-for-gnn-explanations">Desiderata for GNN explanations</h2>
+
+<ul>
+ <li>
+ <p><strong>Local edge fidelity</strong> - identify the subgraph structure (ideally the smallest) that significantly affected the predictions of the GNN. ie identify the important edges in the graph (for a given prediction).</p>
+ </li>
+ <li>
+ <p><strong>Local node fidelity</strong> - identify the important node features and correlations in the features of the neighboring nodes.</p>
+ </li>
+ <li>
+ <p><strong>Single instance and multi-instance explanations</strong> - Support both single instance prediction tasks and multi-instance prediction tasks.</p>
+ </li>
+ <li>
+ <p><strong>Model Agnostic</strong> - Support a large family of models (ideally all)</p>
+ </li>
+ <li>
+ <p><strong>Task Agnostic</strong> - Support a large family of tasks (ideally all)</p>
+ </li>
+</ul>
+
+<h2 id="approach">Approach</h2>
+
+<ul>
+ <li>
+ <p>I first describe the single instance prediction case and use that as the base to describe the multiple instance prediction case. 
All the discussion in this section assumes a single instance prediction task.</p>
+ </li>
+ <li>
+ <p><strong>Input</strong>: Trained GNN, a single instance whose prediction is to be explained.</p>
+ </li>
+ <li>
+ <p><strong>Task</strong>: Identify the small subgraph and the small subset of features that explain the prediction.</p>
+ </li>
+ <li>
+ <p><strong>Idea</strong>: Maximize the mutual information (MI) between the GNN and the explanation by learning a <em>graph mask</em> which can be used for selecting the relevant subgraph (from the GNN’s computational graph) and features (from all layers of the GNN).</p>
+ </li>
+ <li>
+ <p>The computational graph of a GNN (corresponding to a node) refers to the approx L-hop neighborhood of the node in the graph ie the subgraph formed by nodes and edges whose representation affected the representation of the given node.</p>
+ </li>
+</ul>
+
+<h3 id="single-instance-explanations">Single-Instance Explanations</h3>
+
+<ul>
+ <li>
+ <p>For a node <em>v</em>, the information used to predict its label <em>y</em> is completely described by its computation graph <em>G<sub>c</sub>(v)</em> and the associated feature set <em>X<sub>c</sub>(v)</em>. The feature set includes the features of all the nodes in the computation graph.</p>
+ </li>
+ <li>
+ <p>When constructing the explanation, only <em>G<sub>c</sub>(v)</em> and <em>X<sub>c</sub>(v)</em> are used.</p>
+ </li>
+ <li>
+ <p>The task can be reformulated as identifying a subgraph <em>G<sub>S</sub></em> (subset of <em>G<sub>c</sub>(v)</em>) with associated features <em>X<sub>S</sub></em> which are important when predicting the label <em>y</em> for node <em>v</em>.</p>
+ </li>
+ <li>
+ <p>“Importance” is measured in terms of MI</p>
+ </li>
+</ul>
+
+<p><em>MI(Y, (G<sub>S</sub>, X<sub>S</sub>)) = H(Y) - H(Y | G = G<sub>S</sub>, X = X<sub>S</sub>)</em> where <em>H</em> is the entropy and <em>Y</em> is a random variable representing the prediction.</p>
+
+<ul>
+ <li>
+ <p>A further constraint, <em>| G<sub>S</sub>| &lt; k</em>, is imposed to obtain concise explanations.</p>
+ </li>
+ <li>
+ <p>Since <em>H(Y)</em> is fixed (recall that the network has already been trained and is now being used in the inference mode), maximizing MI is equivalent to minimizing the conditional entropy <em>H(Y | G = G<sub>S</sub>, X = X<sub>S</sub>)</em></p>
+ </li>
+ <li>
+ <p>This is equivalent to selecting the subgraph that minimizes the uncertainty in the prediction of <em>y</em> when conditioning on <em>G<sub>S</sub></em> and <em>X<sub>S</sub></em></p>
+ </li>
+</ul>
+
+<h4 id="optimiation-process">Optimization Process</h4>
+
+<ul>
+ <li>
+ <p>Given the exponentially large number of possible subgraphs, we cannot directly optimize the given equation.</p>
+ </li>
+ <li>
+ <p>A “relaxed” adjacency matrix (whose values are real numbers in the range 0 to 1) is introduced where each element of this fractional adjacency matrix is smaller than the corresponding element of the original adjacency matrix. Gradient descent can be performed on this adjacency matrix, as sketched below.</p>
+ </li>
+</ul>
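+<p>A sketch of this mask-based optimization; <code>gnn(x, adj)</code> is an assumed interface returning per-node class log-probabilities, and the entropy coefficient is an arbitrary choice:</p>
+
+<pre><code class="language-python">
+import torch
+
+def explain(gnn, x, adj, node_idx, steps=200, lr=0.1):
+    # Learn a mask M, apply sigmoid(M) * A as a soft adjacency, and maximize
+    # the predicted probability of the original label for the target node.
+    mask = torch.zeros_like(adj, requires_grad=True)
+    label = gnn(x, adj)[node_idx].argmax()
+    opt = torch.optim.Adam([mask], lr=lr)
+    for _ in range(steps):
+        soft_adj = torch.sigmoid(mask) * adj  # relaxed subgraph
+        log_probs = gnn(x, soft_adj)[node_idx]
+        m = torch.sigmoid(mask)
+        # Element-wise entropy regularizer encourages a near-discrete mask.
+        ent = -(m * (m + 1e-8).log() + (1 - m) * (1 - m + 1e-8).log())
+        loss = -log_probs[label] + 0.1 * ent.mean()
+        opt.zero_grad()
+        loss.backward()
+        opt.step()
+    return torch.sigmoid(mask)  # keep only the top-k entries as the explanation
+</code></pre>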
+<ul>
+ <li>
+ <p>The “relaxed” <em>G<sub>S</sub></em> can be interpreted as a variational approximation of the subgraph distributions of <em>G<sub>c</sub>(v)</em> and the objective can be written as <em>min E<sub>G<sub>S</sub></sub>H(Y | G = G<sub>S</sub>, X = X<sub>S</sub>)</em></p>
+ </li>
+ <li>
+ <p>Now the paper makes a big approximation that the GNN is convex so as to leverage Jensen’s inequality and push the expectation inside the entropy term to get an upper bound and then minimize that ie <em>min H(Y | G = E<sub>s</sub>[G<sub>S</sub>], X = X<sub>S</sub>)</em></p>
+ </li>
+ <li>
+ <p>The paper reports that the convexity approximation (along with the discreteness constraint) works in practice.</p>
+ </li>
+ <li>
+ <p>Next, a mean field approximation is used to decompose <em>P(G<sub>S</sub>)</em> as a multivariate Bernoulli distribution ie the product of <em>A<sub>S</sub>(i, j)</em> for all <em>(i, j)</em> belonging to <em>G<sub>c</sub>(v)</em>. <em>A<sub>S</sub></em> can be optimized directly and its values represent the expectation of the Bernoulli distribution on whether the edge <em>e<sub>i, j</sub></em> exists.</p>
+ </li>
+ <li>
+ <p>Given the constraints on <em>A<sub>S</sub></em>, it is easier to learn a mask matrix <em>M</em> and optimize it such that <em>A<sub>S</sub> = M * A<sub>c</sub></em>. Additionally, the sigmoid operator can be applied on <em>M</em>.</p>
+ </li>
+ <li>
+ <p>Once <em>M</em> is learned, only the top <em>k</em> values are retained.</p>
+ </li>
+</ul>
+
+<h4 id="including-node-features-in-the-explanation">Including Node Features in the Explanation</h4>
+
+<ul>
+ <li>
+ <p>Similar to the previous approach, another feature mask is learned (either one for the entire GNN or one per node of the GNN) and is used as a feature selector.</p>
+ </li>
+ <li>
+ <p>The mask could either be learned such that the same set of node features (in terms of dimensions) is selected or a different set of features is selected per node. The paper uses the former as it is more straightforward.</p>
+ </li>
+ <li>
+ <p>Just like before, a “relaxed” mask <em>M<sub>T</sub></em> is trained to select features as <em>M<sub>T</sub> * X<sub>S</sub></em>.</p>
+ </li>
+ <li>
+ <p>One tricky case is where one feature is important but its value is set to 0. In that case, the value will be masked even though it should not be.</p>
+ </li>
+ <li>
+ <p>The workaround is to use Monte Carlo (MC) estimates of marginals of the missing features. 
This gives a way to assign importance scores to each feature dimension and a form of the reparameterization trick is used to perform end-to-end learning.</p>
+ </li>
+ <li>
+ <p>Masks are encouraged to be discrete by regularizing their element-wise entropy.</p>
+ </li>
+ <li>
+ <p>The resulting computation graph is valid in the sense that it allows message passing towards the central node <em>v</em>.</p>
+ </li>
+</ul>
+
+<h2 id="multi-instance-explanations">Multi-Instance Explanations</h2>
+
+<ul>
+ <li>
+ <p>Given a set of nodes (having the label say <em>y</em>), the task is to obtain a global explanation of the predictions.</p>
+ </li>
+ <li>
+ <p>For the given class, a prototypical reference node is chosen by computing the mean of the embeddings of all the nodes in the class and then selecting the node which is closest to the mean.</p>
+ </li>
+ <li>
+ <p>Now, compute the important computational graph corresponding to this node and align the computational subgraphs of all the other nodes (in the given class) to this reference.</p>
+ </li>
+ <li>
+ <p>Let <em>A*</em> be the adjacency matrix and <em>X*</em> be the feature matrix for the explanation corresponding to the reference node. Let <em>A<sub>v</sub></em> and <em>X<sub>v</sub></em> be the adjacency matrix and feature matrix of the to-be-aligned computational graph.</p>
+ </li>
+ <li>
+ <p>A relaxed alignment matrix <em>P</em> is optimized to align the nodes and features in the two graphs ie we minimize <em>|P<sup>T</sup>A<sub>v</sub>P - A*| + |P<sup>T</sup>X<sub>v</sub> - X*|</em></p>
+ </li>
+ <li>
+ <p>Choosing concise explanations helps in efficient graph matching.</p>
+ </li>
+ <li>
+ <p>For GNNs that compute attention over the entire graph, edges with low attention weight can be pruned to increase efficiency.</p>
+ </li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+ <li>
+ <p>Datasets</p>
+
+ <ul>
+ <li>
+ <p>Node classification: BA-Shapes, BA-Community, Tree-Cycles, Tree-Grid</p>
+ </li>
+ <li>
+ <p>Graph classification: MUTAG, Reddit-Binary</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Baselines</p>
+
+ <ul>
+ <li>
+ <p>GRAD - Compute the gradient of the model loss with respect to the adjacency matrix and the node features to be classified, and keep the edges with the highest absolute gradient.</p>
+ </li>
+ <li>
+ <p>GAT - Graph Attention Network</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>The proposed model seems to outperform the baselines both qualitatively and quantitatively. But the results should be taken with a grain of salt as only 2 baselines are considered.</p>
+ </li>
+</ul>
+
+
+
+
+ To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
+
+ 2019-03-16T00:00:00-04:00
+ /site/2019/03/16/To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
+ <ul>
+ <li>
+ <p><a href="https://arxiv.org/abs/1903.05987">Link to the paper</a></p>
+ </li>
+ <li>
+ <p>The paper provides useful empirical advice for adapting pretrained language models for a given target task.</p>
+ </li>
+ <li>
+ <p>Pre-trained models considered</p>
+
+ <ul>
+ <li>
+ <p>ELMo</p>
+ </li>
+ <li>
+ <p>BERT</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Tasks considered</p>
+
+ <ul>
+ <li>
+ <p>Named Entity Recognition (NER) - CoNLL 2003 dataset</p>
+ </li>
+ <li>
+ <p>Sentiment Analysis (SA) - Stanford Sentiment Treebank (SST-2) dataset</p>
+ </li>
+ <li>
+ <p>Natural Language Inference (NLI) - MultiNLI and Sentences Involving Compositional Knowledge (SICK-E) dataset</p>
+ </li>
+ <li>
+ <p>Paraphrase Detection (PD) - Microsoft Research Paraphrase Corpus (MRPC)</p>
+ </li>
+ <li>
+ <p>Semantic Textual Similarity (STS) - Semantic Textual Similarity Benchmark (STS-B) and SICK-R</p>
+ </li>
+ <li>
+ <p>The last 3 tasks (NLI, PD, STS) are defined for sentence pairs.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Adaptation Strategies</p>
+
+ <ul>
+ <li>
+ <p>Feature Extraction</p>
+
+ <ul>
+ <li>
+ <p>The pretrained model is only used for extracting features and its weights are kept fixed.</p>
+ </li>
+ <li>
+ <p>For both ELMo and BERT, the contextual representations of the words from all the layers are extracted.</p>
+ </li>
+ <li>
+ <p>A weighted combination of these layers is used as an input to the task-specific model (see the sketch after this list).</p>
+ </li>
+ <li>
+ <p>Task-specific models</p>
+
+ <ul>
+ <li>
+ <p>NER - BiLSTM with CRF layer</p>
+ </li>
+ <li>
+ <p>SA - bi-attentive classification network</p>
+ </li>
+ <li>
+ <p>NLI, PD, STS - <a href="https://arxiv.org/abs/1609.06038">Enhanced Sequential Inference Model (ESIM)</a></p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Fine-tuning</p>
+
+ <ul>
+ <li>
+ <p>The pretrained model is finetuned on the target task.</p>
+ </li>
+ <li>
+ <p>Task-specific models for ELMo</p>
+
+ <ul>
+ <li>
+ <p>NER - CRF on top of LSTM states</p>
+ </li>
+ <li>
+ <p>SA - Max-pool over the language model states followed by a softmax layer</p>
+ </li>
+ <li>
+ <p>NLI, PD, STS - cross sentence bi-attention between the language model states followed by pooling and a softmax layer.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Task-specific models for BERT</p>
+
+ <ul>
+ <li>
+ <p>NER - Extract representation of the first word piece of each token followed by the softmax layer</p>
+ </li>
+ <li>
+ <p>SA, NLI, PD, STS - standard BERT training</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ </ul>
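+<p>The “weighted combination of these layers” in the feature-extraction strategy can be pictured as a learned scalar mix over the frozen model’s layer outputs; a small sketch (layer count and dimensions are illustrative, not the paper’s exact recipe):</p>
+
+<pre><code class="language-python">
+import torch
+import torch.nn as nn
+
+class ScalarMix(nn.Module):
+    # Softmax-weighted combination of the frozen model's layer outputs,
+    # in the spirit of ELMo-style feature extraction.
+    def __init__(self, num_layers):
+        super().__init__()
+        self.weights = nn.Parameter(torch.zeros(num_layers))
+        self.gamma = nn.Parameter(torch.ones(1))
+
+    def forward(self, layer_reps):  # (num_layers, batch, seq_len, dim)
+        w = torch.softmax(self.weights, dim=0)
+        return self.gamma * (w.view(-1, 1, 1, 1) * layer_reps).sum(dim=0)
+
+reps = torch.randn(3, 8, 20, 1024)  # e.g., 3 frozen ELMo layers
+features = ScalarMix(3)(reps)       # input to the task-specific model
+</code></pre>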
+ <ul>
+ <li>
+ <p>Main observations</p>
+
+ <ul>
+ <li>
+ <p>Feature extraction and fine-tuning have comparable performance in most cases unless the two tasks are highly similar (fine-tuning is better) or highly dissimilar (feature extraction is better).</p>
+ </li>
+ <li>
+ <p>For ELMo, feature extraction consistently outperforms fine-tuning for the sentence pair tasks (NLI, PD, STS). The reverse trend is observed for BERT, with fine-tuning being better on sentence pair tasks.</p>
+ </li>
+ <li>
+ <p>Adding extra parameters is helpful for feature extraction but not fine-tuning.</p>
+ </li>
+ <li>
+ <p>ELMo fine-tuning requires careful tuning and other tricks like triangular learning rates, gradual unfreezing and discriminative fine-tuning.</p>
+ </li>
+ <li>
+ <p>For the tasks considered, there is no correlation observed between the distance of the source and target domains and adaptation performance.</p>
+ </li>
+ <li>
+ <p>Training a diagnostic classifier (on the intermediate representations) suggests that fine-tuning improves the performance of the classifier at all the intermediate layers (which is sort of expected).</p>
+ </li>
+ <li>
+ <p>In terms of mutual information estimates, fine-tuned representations have much higher mutual information as compared to the feature-extraction-based representations.</p>
+ </li>
+ <li>
+ <p>Knowledge for single sentence tasks seems to be mostly concentrated in the last layers while for pair classification tasks, the knowledge seems to be gradually built up in the intermediate layers, all the way up to the last layer.</p>
+ </li>
+ </ul>
+ </li>
+</ul>
+
+
+
+
+ Model Primitive Hierarchical Lifelong Reinforcement Learning
+
+ 2019-03-12T00:00:00-04:00
+ /site/2019/03/12/Model Primitive Hierarchical Lifelong Reinforcement Learning
+ <h2 id="introduction">Introduction</h2>
+
+<ul>
+ <li>
+ <p>The paper presents a framework that uses diverse suboptimal world models to break complex policies into simpler and modular sub-policies.</p>
+ </li>
+ <li>
+ <p>Given a task, both the sub-policies and the controller are simultaneously learned in a bottom-up manner.</p>
+ </li>
+ <li>
+ <p>The framework is called Model Primitive Hierarchical Reinforcement Learning (MPHRL).</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1903.01567">Link to the paper</a></p>
+ </li>
+</ul>
+
+<h2 id="idea">Idea</h2>
+
+<ul>
+ <li>
+ <p>Instead of learning a single transition model of the environment (aka <em>world model</em>) that can model the transitions very well, it is sufficient to learn several (say <em>k</em>) suboptimal models (aka <em>model primitives</em>).</p>
+ </li>
+ <li>
+ <p>Each <em>model primitive</em> will be good in only a small part of the state space (aka its <em>region of specialization</em>).</p>
+ </li>
+ <li>
+ <p>These <em>model primitives</em> can then be used to train a gating mechanism for selecting sub-policies to solve a given task.</p>
+ </li>
+ <li>
+ <p>Since these <em>model primitives</em> are sub-optimal, they are not directly used with model-based RL but are used to obtain useful functional decompositions, and the sub-policies are trained with model-free approaches.</p>
+ </li>
+</ul>
+
+<h2 id="single-task-learning">Single Task Learning</h2>
+
+<ul>
+ <li>
+ <p>A gating controller is trained to choose the sub-policy whose <em>model primitive</em> makes the best prediction.</p>
+ </li>
+ <li>
+ <p>This requires modeling <em>p(M<sub>k</sub> | s<sub>t</sub>, a<sub>t</sub>, s<sub>t+1</sub>)</em> where <em>p(M<sub>k</sub>)</em> denotes the probability of selecting the <em>k<sup>th</sup> model primitive</em>. This is hard to compute as the system does not have access to <em>s<sub>t+1</sub></em> and <em>a<sub>t</sub></em> at time <em>t</em>, before it has chosen the sub-policy.</p>
+ </li>
+ <li>
+ <p>Properly marginalizing <em>s<sub>t+1</sub></em> and <em>a<sub>t</sub></em> would require expensive MC sampling. 
Hence an approximation is used and the gating controller is modeled as a categorical distribution - to produce <em>p(M<sub>k</sub> | s<sub>t</sub>)</em>. This is trained via a conditional cross entropy loss where the ground truth distribution is obtained from transitions sampled in a rollout.</p>
+ </li>
+ <li>
+ <p>The paper notes that this technique is biased but reports that it still works for the downstream tasks.</p>
+ </li>
+ <li>
+ <p>The gating controller composes the sub-policies as a mixture of Gaussians.</p>
+ </li>
+ <li>
+ <p>For learning, the PPO algorithm is used, with the gradient for each sub-policy weighted by the probability from the gating controller.</p>
+ </li>
+</ul>
+
+<h2 id="lifelong-learning">Lifelong Learning</h2>
+
+<ul>
+ <li>Different tasks could share common subtasks but may require a different composition of subtasks. Hence, the learned sub-policies are transferred across tasks but not the gating controller or the baseline estimator (from PPO).</li>
+</ul>
+
+<h2 id="experiments">Experiments</h2>
+
+<ul>
+ <li>
+ <p>Domains:</p>
+
+ <ul>
+ <li>
+ <p>Mujoco ant navigating different mazes.</p>
+ </li>
+ <li>
+ <p>Stacker arm picking up and placing different boxes.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Implementation Details:</p>
+
+ <ul>
+ <li>
+ <p>Gaussian subpolicies</p>
+ </li>
+ <li>
+ <p>PPO as the baseline</p>
+ </li>
+ <li>
+ <p>Model primitives are hand-crafted using the true next state provided by the environment simulator.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Single Task</p>
+
+ <ul>
+ <li>
+ <p>Only the maze task is considered, with the start position (of the ant) and the goal position fixed.</p>
+ </li>
+ <li>
+ <p>Observation includes distance from the goal.</p>
+ </li>
+ <li>
+ <p>Forcing the agent to decompose the problem, when a more direct solution may be available, causes the sample complexity to increase on one task.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Lifelong Learning</p>
+
+ <ul>
+ <li>
+ <p>Maze</p>
+
+ <ul>
+ <li>
+ <p>10 random Mujoco ant mazes used as the task distribution.</p>
+ </li>
+ <li>
+ <p>MPHRL takes almost twice the number of steps (as compared to the PPO baseline) to solve the first task but this cost gets amortized over the distribution and the model takes half the number of steps as compared to the baseline (summed over the 10 tasks).</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Pick and Place</p>
+
+ <ul>
+ <li>
+ <p>8 Pick and Place tasks are created with max 3 goal locations.</p>
+ </li>
+ <li>
+ <p>Observation includes the position of the goal.</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Ablations</p>
+
+ <ul>
+ <li>
+ <p>Overlapping <em>model primitives</em> can degrade the performance (to some extent). Similarly, the performance suffers when redundant primitives are introduced, indicating that the gating mechanism is not very robust.</p>
+ </li>
+ <li>
+ <p>Sub-policies could quickly adapt to the previous tasks (on which they were trained initially) despite being finetuned on subsequent tasks.</p>
+ </li>
+ <li>
+ <p>The order of tasks (in the 10-Maze task) does not degrade the performance.</p>
+ </li>
+ <li>
+ <p>Transferring the gating controller leads to negative transfer.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Notes</p>
+
+ <ul>
+ <li>I think the biggest strength of the work is that accurate dynamics models are not needed (which are hard to train anyways!), though the experimental results are not conclusive given the limited number of domains on which the approach is tested.</li>
+ </ul>
+ </li>
+</ul>
<h2 id="lifelong-learning">Lifelong Learning</h2> + +<ul> + <li>Different tasks could share common subtasks but may require a different composition of subtasks. Hence, the learned sub-policies are transferred across tasks but not the gating controller or the baseline estimator (from PPO).</li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Domains:</p> + + <ul> + <li> + <p>Mujoco ant navigating different mazes.</p> + </li> + <li> + <p>Stacker arm picking up and placing different boxes.</p> + </li> + </ul> + </li> + <li> + <p>Implementation Details:</p> + + <ul> + <li> + <p>Gaussian subpolicies</p> + </li> + <li> + <p>PPO as the baseline</p> + </li> + <li> + <p>Model primitives are hand-crafted using the true next state provided by the environment simulator.</p> + </li> + </ul> + </li> + <li> + <p>Single Task</p> + + <ul> + <li> + <p>Only the maze task is considered, with the start position (of the ant) and the goal position fixed.</p> + </li> + <li> + <p>Observation includes distance from the goal.</p> + </li> + <li> + <p>Forcing the agent to decompose the problem, when a more direct solution may be available, causes the sample complexity to increase on one task.</p> + </li> + </ul> + </li> + <li> + <p>Lifelong Learning</p> + + <ul> + <li> + <p>Maze</p> + + <ul> + <li> + <p>10 random Mujoco ant mazes used as the task distribution.</p> + </li> + <li> + <p>MPHRL takes almost twice the number of steps (as compared to the PPO baseline) to solve the first task, but this cost gets amortized over the distribution and the model takes half the number of steps as compared to the baseline (summed over the 10 tasks).</p> + </li> + </ul> + </li> + <li> + <p>Pick and Place</p> + + <ul> + <li> + <p>8 Pick and Place tasks are created with a maximum of 3 goal locations.</p> + </li> + <li> + <p>Observation includes the position of the goal.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p>Ablations</p> + + <ul> + <li> + <p>Overlapping <em>model primitives</em> can degrade the performance (to some extent). Similarly, the performance suffers when redundant primitives are introduced, indicating that the gating mechanism is not very robust.</p> + </li> + <li> + <p>Sub-policies could quickly adapt to the previous tasks (on which they were trained initially) despite being finetuned on subsequent tasks.</p> + </li> + <li> + <p>The order of tasks (in the 10-Maze task) does not degrade the performance.</p> + </li> + <li> + <p>Transferring the gating controller leads to negative transfer.</p> + </li> + </ul> + </li> + <li> + <p>Notes</p> + + <ul> + <li>I think the biggest strength of the work is that accurate dynamics models are not needed (they are hard to train anyway!), though the experimental results are not conclusive given the limited number of domains on which the approach is tested.</li> + </ul> + </li> +</ul> + + + + + TuckER - Tensor Factorization for Knowledge Graph Completion + + 2019-02-19T00:00:00-05:00 + /site/2019/02/19/TuckER-Tensor Factorization for Knowledge Graph Completion + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>TuckER is a simple, yet powerful linear model that uses Tucker decomposition for the task of link prediction in knowledge graphs.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1901.09590">Paper</a></p> + </li> + <li> + <p><a href="https://github.com/ibalazevic/TuckER">Implementation</a></p> + </li> +</ul> + +<h2 id="knowledge-graph-as-a-tensor">Knowledge Graph as a Tensor</h2> + +<ul> + <li> + <p>Let E be the set of all the entities and R be the set of all the relations in a given knowledge graph (KG).</p> + </li> + <li> + <p>The KG can be represented as a list of triples of the form (subject entity, relation, object entity) or (e<sub>s</sub>, r, e<sub>o</sub>).</p> + </li> + <li> + <p>The list of triples can be represented as a third-order tensor (of binary values) where each element corresponds to a triple and each element’s value indicates whether that triple is present in the KG or not.</p> + </li> + <li> + <p>The link prediction task can be formulated as - given a set of all triples, learn a scoring function that assigns a score to each triple. The score indicates whether the triple is actually present in the KG or not.</p> + </li> +</ul> + +<h2 id="tucker-decomposition">Tucker Decomposition</h2> + +<ul> + <li> + <p>Tucker decomposition factorizes a tensor into a set of factor matrices and a smaller core tensor.</p> + </li> + <li> + <p>In the specific case of three-mode tensors (an alternate representation of a KG), the given original tensor <strong>X</strong> (of shape <em>IxJxK</em>) can be factorized into a core tensor <strong>W</strong> (of shape <em>PxQxR</em>) and 3 factor matrices - <strong>A</strong> (of shape <em>IxP</em>), <strong>B</strong> (of shape <em>JxQ</em>) and <strong>C</strong> (of shape <em>KxR</em>) such that <strong>X</strong> is approximately <strong>W</strong> x<sub>1</sub> <strong>A</strong> x<sub>2</sub> <strong>B</strong> x<sub>3</sub> <strong>C</strong>, where x<sub>n</sub> denotes the tensor product along the nth mode.</p> + </li> + <li> + <p>Generally, <em>P, Q, R</em> are smaller than <em>I, J, K</em> (respectively) and <strong>W</strong> can be seen as a compressed version of <strong>X</strong>.</p> + </li> +</ul> + +<h2 id="tucker-decomposition-for-link-prediction">TuckER Decomposition for Link Prediction</h2> + +<ul> + <li> + <p>Two embedding matrices are used for embedding the entities and the relations respectively.</p> + </li> + <li> + <p>The entity embedding matrix <strong>E</strong> is shared for both the subject and the object ie <strong>E</strong> = <strong>A</strong> = <strong>B</strong>.</p> + </li> + <li> + <p>The scoring function is given as <strong>W</strong> x<sub>1</sub> <strong>e<sub>s</sub></strong> x<sub>2</sub> <strong>w<sub>r</sub></strong> x<sub>3</sub> <strong>e<sub>o</sub></strong> where <strong>e<sub>s</sub></strong>, <strong>w<sub>r</sub></strong> and <strong>e<sub>o</sub></strong> are the embedding vectors corresponding to e<sub>s</sub>, r and e<sub>o</sub> respectively (see the sketch after this list). Note that both the core tensor and the factor matrices are to be learnt.</p> + </li> + <li> + <p>The model is trained with the standard negative log-likelihood (binary cross-entropy) loss, given as (for one triple): -(y * log(p) + (1-y) * log(1-p)).</p> + </li> + <li> + <p>To speed up training and increase accuracy, 1-N scoring is used: a given (e<sub>s</sub>, r) is simultaneously scored against all the entities, using the local closed-world assumption (the knowledge graph is only locally complete).</p> + </li> + <li> + <p>Handling asymmetric relations is straightforward: a relation embedding is learnt alongside a relation-agnostic core tensor, which enables knowledge sharing across relations.</p> + </li> +</ul>
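+<p>A minimal PyTorch sketch of the scoring function with 1-N scoring (the dropout and batch normalization used by the reference implementation are omitted):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch

def tucker_score(W, e_s, w_r, E):
    # W: core tensor (d_r, d_e, d_e); e_s: (batch, d_e) subject embeddings;
    # w_r: (batch, d_r) relation embeddings; E: (n_entities, d_e) entity matrix.
    d_r, d_e, _ = W.shape
    W_r = torch.matmul(w_r, W.view(d_r, -1)).view(-1, d_e, d_e)  # W x_2 w_r
    x = torch.bmm(e_s.unsqueeze(1), W_r).squeeze(1)              # ... x_1 e_s
    return torch.sigmoid(x @ E.t())   # scores against every candidate object
</code></pre></div></div>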
<h2 id="theoretical-analysis">Theoretical Analysis</h2> + +<ul> + <li> + <p>One important consideration would be the expressive power of TuckER models, especially in relation to other models like ComplEx and SimplE.</p> + </li> + <li> + <p>It can be shown that TuckER is fully expressive ie given any ground truth over E and R, there exists a TuckER model which can perfectly represent the data - using 1-hot entity and relation embeddings.</p> + </li> + <li> + <p>For full expressiveness, the required dimensionality of entities (relations) is n<sub>E</sub> (n<sub>R</sub>), where n<sub>E</sub> (n<sub>R</sub>) is the number of entities (relations). In comparison, the required dimensionality for ComplEx is n<sub>E</sub> * n<sub>R</sub> (for both entities and relations) and for SimplE, it is min(n<sub>E</sub> * n<sub>R</sub>, number of facts + 1) (for both entities and relations).</p> + </li> + <li> + <p>Many existing models like RESCAL, DistMult, ComplEx, SimplE etc can be seen as special cases of TuckER.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<h3 id="datasets">Datasets</h3> + +<ul> + <li> + <p>FB15k, FB15k-237, WN18, WN18RR</p> + </li> + <li> + <p>The max number of entities is around 41K and the max number of relations is around 1.3K.</p> + </li> +</ul> + +<h3 id="implementation">Implementation</h3> + +<ul> + <li>BatchNorm, Dropout and learning rate decay are used.</li> +</ul> + +<h3 id="metrics">Metrics</h3> + +<ul> + <li> + <p>Mean Reciprocal Rank (MRR) - the average of the inverse of the mean rank assigned to the true triple over all n<sub>e</sub> generated triples.</p> + </li> + <li> + <p>hits@k (k = 1, 3, 10) - percentage of times the true triple is ranked in the top k of the n<sub>e</sub> generated triples.</p> + </li> + <li> + <p>Higher is better for both the metrics.</p> + </li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li> + <p>TuckER outperforms all the baseline models on all but one task.</p> + </li> + <li> + <p>Dropout is an important factor, with higher dropout rates (0.3, 0.4, 0.5) needed for datasets with fewer training examples per relation (hence more prone to overfitting).</p> + </li> + <li> + <p>TuckER improves performance more significantly when the number of relations is large.</p> + </li> + <li> + <p>Even with lower embedding dimensions, TuckER’s performance does not deteriorate as much as other models.</p> + </li> +</ul> + + + + + Linguistic Knowledge as Memory for Recurrent Neural Networks + + 2019-02-05T00:00:00-05:00 + /site/2019/02/05/Linguistic Knowledge as Memory for Recurrent Neural Networks + <ul> + <li> + <p><a href="https://arxiv.org/abs/1703.02620">Link to the paper</a></p> + </li> + <li> + <p>Training RNNs to model long-term dependencies is difficult, but in some cases, the information about dependencies between elements (of the sequence) may be present in the form of symbolic knowledge.</p> + </li> + <li> + <p>For example, when encoding sentences, coreference and hypernymy relations can be extracted between tokens.</p> + </li> + <li> + <p>These elements (tokens) can be connected with each other with different kinds of edges, resulting in a graph data structure.</p> + </li> + <li> + <p>One approach could be to model this knowledge (encoded in the graph) using a graph neural network (GNN).</p> + </li> + <li> + <p>The authors prefer to encode the information into 2 DAGs (via topological sorting) as training the GNN could add some extra overhead.</p> + </li> + <li> + <p>This results in the Memory as Acyclic Graph Encoding RNN (MAGE-RNN) architecture. Its GRU version is referred to as MAGE-GRU.</p> + </li> + <li> + <p>Given an input sequence of tokens [x<sub>1</sub>, x<sub>2</sub>, …, x<sub>T</sub>] and information about which tokens relate to other tokens, a graph G is constructed with different (possibly typed) edges.</p> + </li> + <li> + <p>Given the graph <em>G</em>, two DFS orderings are computed - forward DFS and backward DFS.</p> + </li> + <li> + <p>MAGE-RNN uses separate networks for accessing the forward and backward DFS orders.</p> + </li> + <li> + <p>A separate hidden state is maintained for each edge type to separate memory content from addressing.</p> + </li> + <li> + <p>For any DFS order (forward or backward), the representation at time <em>t</em> is given as the concatenation of the representations for the different edge types at that time.</p> + </li> + <li> + <p>The hidden states (for different edge types at time t) are updated in the topological order using the current state of all incoming edges at x<sub>t</sub>.</p> + </li> + <li> + <p>The representation of the DFS order is given as the sequence of all the previous representations.</p> + </li> + <li> + <p>In some cases, elements across multiple sequences could be related to each other. In that case, the graph is decomposed into a collection of DAGs, and MAGE-GRU is used on the DAGs by taking one random permutation of the sequences and decomposing it into the forward and the backward graphs.</p> + </li> + <li> + <p>The model is evaluated on the task of text comprehension with coreference on the bAbI dataset (story based QA), the LAMBADA dataset (broad context language modeling) and the CNN dataset (cloze-style QA).</p> + </li> + <li> + <p>MAGE-GRU was used as a replacement for GRU units in bi-directional GRUs and the GA-Reader architecture.</p> + </li> + <li> + <p>DAG-RNN and a shared version of MAGE-GRU (with shared edge types) are the other baselines.</p> + </li> + <li> + <p>For all the cases, the model with MAGE-GRU works the best.</p> + </li> +</ul> + + + + + Diversity is All You Need - Learning Skills without a Reward Function + + 2019-01-29T00:00:00-05:00 + /site/2019/01/29/Diversity is All You Need - Learning Skills without a Reward Function + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes an approach to learn useful skills without a reward function by maximizing an information-theoretic objective using a maximum-entropy policy.</p> + </li> + <li> + <p>Skills are defined as latent-conditioned policies that alter the state of the environment in a consistent way.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1802.06070">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/ben-eysenbach/sac">Link to the code</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li>Unsupervised “exploration” stage followed by a supervised stage.</li> +</ul> + +<h2 id="desirable-qualities-of-skills">Desirable Qualities of Skills</h2> + +<ul> + <li> + <p>Skills should dictate the states that the agent visits. Different skills should visit different states to be distinguishable.</p> + </li> + <li> + <p>States (not actions) should be used to distinguish between skills as not all actions change the state (for the outside observer).</p> + </li> + <li> + <p>Skills are encouraged to be diverse and “exploratory” by learning skills that act randomly (have high entropy).</p> + </li> +</ul> + +<h2 id="loss-formulation">Loss Formulation</h2> + +<ul> + <li> + <p>(S, A) - state and action</p> + </li> + <li> + <p>z ~ p(z) - latent variable to condition the policy.</p> + </li> + <li> + <p>Skill - policy conditioned on a fixed z.</p> + </li> + <li> + <p>The objective is to maximize the mutual information between skill and state (MI(S; Z)) ie the skill should control which state is visited, or the skill should be inferable from the state visited.</p> + </li> + <li> + <p>Simultaneously, minimize the mutual information between skills and actions given the state, to ensure that the state (and not the action) is used to distinguish the skills.</p> + </li> + <li> + <p>Maximize the entropy of the mixture of policies (p(z) and all the skills).</p> + </li> +</ul> + +<h2 id="implementation">Implementation</h2> + +<ul> + <li> + <p>Policy π(a | s, z)</p> + </li> + <li> + <p>The task reward is replaced by the pseudo-reward log q<sub>φ</sub>(z | s) - log p(z) (see the sketch after this list).</p> + </li> + <li> + <p>During unsupervised training, z is sampled at the start of the episode and then not changed during the episode.</p> + </li> + <li> + <p>The learning agent gets rewards for visiting states that are easy to discriminate, while the discriminator is updated to correctly predict z from the states visited.</p> + </li> +</ul>
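+<p>A minimal sketch of the pseudo-reward computation (names are illustrative: the discriminator maps states to logits over the K skills, and p(z) is assumed uniform here):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
import torch.nn.functional as F

def diayn_reward(discriminator, states, z, num_skills):
    # log q_phi(z | s): log-probability the discriminator assigns to
    # the skill that actually generated these states.
    log_q = F.log_softmax(discriminator(states), dim=-1)
    log_q_z = log_q.gather(-1, z.unsqueeze(-1)).squeeze(-1)
    log_p_z = -torch.log(torch.tensor(float(num_skills)))  # uniform prior
    return log_q_z - log_p_z
</code></pre></div></div>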
<h2 id="observations">Observations</h2> + +<h3 id="analysis-of-learned-skills">Analysis of Learned Skills</h3> + +<ul> + <li> + <p>The agent learns a diverse set of primitive behaviors for all tasks, ranging from 2 DoF to 111 DoF.</p> + </li> + <li> + <p>For inverted pendulum and mountain car, the skills become increasingly diverse throughout training.</p> + </li> + <li> + <p>Use of a uniform prior, in place of a learned prior, for p(z) allows for the discovery of more diverse skills.</p> + </li> + <li> + <p>The proposed approach can be used as a pretraining technique where the best-performing primitives (from unsupervised training) can be finetuned with the task-specific rewards.</p> + </li> + <li> + <p>The discovered skills can be used for hierarchical RL by learning a meta-policy (which chooses the skill to execute for k steps).</p> + </li> + <li> + <p>Modifying the discriminator in the proposed formulation can be used to bias DIAYN towards discovering a particular type of policies. This provides a mechanism for incorporating “supervision” in the learning setup.</p> + </li> + <li> + <p>The “discovered” primitives can also be used for imitation learning.</p> + </li> +</ul> + + + + + Modular meta-learning + + 2019-01-22T00:00:00-05:00 + /site/2019/01/22/Modular meta-learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes an approach for learning neural networks (modules) that can be combined in different ways to solve different tasks (combinatorial generalization).</p> + </li> + <li> + <p>The proposed model is called BOUNCEGRAD.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1806.10166">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/FerranAlet/modular-metalearning">Link to the code</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Focuses on supervised learning.</p> + </li> + <li> + <p>Task distribution <em>p(T)</em>.</p> + </li> + <li> + <p>Each task is a joint distribution <em>p<sub>T</sub>(x, y)</em> over <em>(x, y)</em> data pairs.</p> + </li> + <li> + <p>Given data from <em>m</em> meta-training tasks, and a meta-test task, find a hypothesis <em>h</em> which performs well on the unseen data drawn from the meta-test task.</p> + </li> +</ul> + +<h2 id="structured-hypothesis">Structured Hypothesis</h2> + +<ul> + <li> + <p>Given a compositional scheme <em>C</em>, a set of modules <em>F<sub>1</sub>, …, F<sub>k</sub></em> (represented as a whole by <em>F</em>) and the set of their respective parameters θ<sub>1</sub>, …, θ<sub>k</sub> (represented as a whole by θ), <em>(C, F, θ)</em> represents the set of possible functional input-output mappings. These mappings form the hypothesis space.</p> + </li> + <li> + <p>A structured hypothesis model is specified by which modules to use and their parametric forms (but not the parameter values).</p> + </li> +</ul> + +<h3 id="examples-of-compositional-schemes">Examples of compositional schemes</h3> + +<ul> + <li> + <p>Choosing a single module for the task at hand.</p> + </li> + <li> + <p>Fixed compositional structure but different modules selected every time.</p> + </li> + <li> + <p>Weighted ensemble (maybe using an attention mechanism)</p> + </li> + <li> + <p>General function composition tree</p> + </li> +</ul> + +<h3 id="phases">Phases</h3> + +<ul> + <li> + <p>Offline Meta-Learning Phase:</p> + + <ul> + <li> + <p>Take the training and validation datasets for the first <em>k</em> tasks and generate a parameterization for each module, <em>θ<sub>1</sub>, …, θ<sub>k</sub></em>.</p> + </li> + <li> + <p>The hypothesis (or composition) to use comes from the online meta-test learning phase.</p> + </li> + <li> + <p>In this stage, find the best θ given a structure.</p> + </li> + </ul> + </li> + <li> + <p>Online Meta-test Learning Phase</p> + + <ul> + <li> + <p>Given a hypothesis space and θ, the output is a compositional form (or hypothesis) that specifies how to compose the modules.</p> + </li> + <li> + <p>In this stage, find the best structure, given a hypothesis space and θ.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="learning-algorithm">Learning Algorithm</h2> + +<ul> + <li> + <p>During the meta-test learning phase, simulated annealing is used to find the optimal structure, with the temperature <em>T</em> decreased over time (see the sketch after this list).</p> + </li> + <li> + <p>During the meta-learning phase, the actual objective function is replaced by a surrogate, smooth objective function (during the search step) to avoid local minima.</p> + </li> + <li> + <p>Once a structure has been picked, any gradient descent based approach can be used to optimize the modules.</p> + </li> + <li> + <p>Basically, the state of the optimization process comprises the parameters and the temperature. Together, they are used to induce a distribution over the structures. Given a structure, θ is optimized and <em>T</em> is annealed over time.</p> + </li> + <li> + <p>The learning procedure can be improved upon by performing parameter tuning during the online (meta-test learning) phase as well. The resulting approach is referred to as MOMA - MOdular MAml.</p> + </li> +</ul>
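+<p>A minimal sketch of the simulated-annealing structure search (the proposal function and the loss are placeholders for whatever compositional scheme is used; the annealing schedule is an assumption):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import math
import random

def anneal_structure(init_structure, neighbour, loss_fn, steps=1000, T0=1.0):
    # neighbour(s) proposes a small modification of structure s
    # (e.g. swapping one module); loss_fn scores a structure.
    s, loss = init_structure, loss_fn(init_structure)
    for step in range(steps):
        T = T0 / (1 + step)            # temperature decreases over time
        s_new = neighbour(s)
        loss_new = loss_fn(s_new)
        # Accept improvements always; worse moves with prob e^(-delta/T).
        if loss_new &lt; loss or random.random() &lt; math.exp((loss - loss_new) / T):
            s, loss = s_new, loss_new
    return s
</code></pre></div></div>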
<h2 id="experiments">Experiments</h2> + +<h3 id="approaches">Approaches</h3> + +<ul> + <li> + <p>Pooled - A single network using the combined data of all the tasks.</p> + </li> + <li> + <p>MAML - A single network trained using MAML.</p> + </li> + <li> + <p>BOUNCEGRAD - Modular networks without MAML adaptation in online learning.</p> + </li> + <li> + <p>MOMA - BOUNCEGRAD with MAML adaptation in online learning.</p> + </li> +</ul> + +<h3 id="domains">Domains</h3> + +<h4 id="simple-functional-relationships">Simple Functional Relationships</h4> + +<ul> + <li> + <p>Sine-function prediction problem</p> + </li> + <li> + <p>In general, MOMA outperforms the other models.</p> + </li> + <li> + <p>With a small amount of online training data, BOUNCEGRAD outperforms the other models as it has a better structural prior.</p> + </li> +</ul> + +<h4 id="predicting-the-results-of-pushing-objects">Predicting the results of pushing objects</h4> + +<ul> + <li> + <p>11 different objects (with different shapes) on 4 surfaces with different friction properties.</p> + </li> + <li> + <p>2 meta-learning scenarios are considered. In the first case, the object-surface combination in the test task was present in some meta-training tasks, and in the other case, it was not present.</p> + </li> + <li> + <p>For previously seen combinations, MOMA performs the best, followed by BOUNCEGRAD and MAML.</p> + </li> + <li> + <p>For unseen combinations, all 3 are equally good.</p> + </li> + <li> + <p>The compositional scheme is the attention mechanism.</p> + </li> + <li> + <p>An interesting result is that the modules seem to specialize (and activate more often) based on the shape of the object.</p> + </li> +</ul> + +<h3 id="predicting-next-frame-of-a-kinematic-selection-using-motion-capture-data">Predicting the next frame of a kinematic skeleton (using motion capture data)</h3> + +<ul> + <li> + <p>Composition structure - generating kinematic subtrees for each body part (2 legs, 2 arms, 2 torsi).</p> + </li> + <li> + <p>Again, 2 setups are used - one where all activities in the training and the meta-test task are shared, and another where the activities are not shared.</p> + </li> + <li> + <p>For known activities, MOMA and BOUNCEGRAD perform the best, while for unknown activities, MOMA performs the best.</p> + </li> +</ul> + +<h2 id="notes">Notes</h2> + +<ul> + <li> + <p>While the approach is interesting, maybe a more suitable set of tasks (from the point of view of composition) would be more convincing.</p> + </li> + <li> + <p>It would be useful to see the computational tradeoff between MAML, BOUNCEGRAD, and MOMA.</p> + </li> +</ul> + + + + + Hierarchical RL Using an Ensemble of Proprioceptive Periodic Policies + + 2019-01-15T00:00:00-05:00 + /site/2019/01/15/Hierarchical RL Using an Ensemble of Proprioceptive Periodic Policies + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes a simple and robust approach for hierarchically training an agent in the sparse reward setup.</p> + </li> + <li> + <p>The broad idea is to train low-level primitives that are sufficiently diverse (so that they can be composed for solving higher-level tasks) and to train a high-level primitive that learns to combine these primitives for any given downstream task.</p> + </li> + <li> + <p><a href="https://openreview.net/forum?id=SJz1x20cFQ">Link to the paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li>The state can be divided into two components: the proprioceptive states s<sup>p</sup> (measurements of the agent’s own body that can be directly controlled by the agent) and the external states s<sup>e</sup>.</li> +</ul> + +<h3 id="low-level-policy-training">Low-Level Policy Training</h3> + +<ul> + <li> + <p>Low-level policies should be:</p> + + <ul> + <li>Diverse: should cover all the skills that the agent might have to perform.</li> + <li>Effective: can make significant changes to the environment.</li> + <li>Controllable: easy for high-level policies to use and control.</li> + </ul> + </li> + <li> + <p>For the low-level policy, the per-time step reward is directly proportional to the change in the external state. The same reward is used for all the agents and environments (up to environment-specific scaling and survival rewards).</p> + </li> +</ul> + +<h3 id="phase-conditioned-policies">Phase conditioned policies</h3> + +<ul> + <li> + <p>Good movement policies are expected to be at least roughly periodic, and a phase input (or time index) is used to achieve periodicity.</p> + </li> + <li> + <p>Phase conditioned policy (= f(s<sup>p</sup>, φ)) where φ ∈ {0, 1, …, k-1} is the phase index.</p> + </li> + <li> + <p>At each timestep <em>t</em>, the model receives the observation s<sup>p</sup> and the phase index φ = t mod k. The phase index is represented by a vector b<sub>φ</sub> (see the sketch after this list).</p> + </li> + <li> + <p>For phase conditioned policies, the agent states and actions are encouraged to be cyclic with the help of a cyclic loss.</p> + </li> +</ul>
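+<p>A small sketch of the phase-conditioned input, assuming a one-hot encoding for b<sub>φ</sub> (a learned embedding indexed by φ would work the same way):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def phase_conditioned_input(s_p, t, k=10):
    # phi = t mod k cycles through {0, ..., k-1}, making the policy
    # input (and hence the behaviour) roughly periodic with period k.
    b_phi = np.zeros(k)
    b_phi[t % k] = 1.0
    return np.concatenate([s_p, b_phi])   # fed to the low-level policy
</code></pre></div></div>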
<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Environments: Ant and Humanoid from Mujoco.</p> + </li> + <li> + <p>Low-level control:</p> + + <ul> + <li>Using phase-conditioning is helpful when training low-level primitives.</li> + </ul> + </li> + <li> + <p>High-level control:</p> + + <ul> + <li> + <p>Cross Maze Environment with fixed goals</p> + + <ul> + <li> + <p>3 goals along 3 paths</p> + </li> + <li> + <p>The proposed method converges faster and to a smaller final distance to the goal, showing that it is both efficient and consistent (with smaller variance across random seeds).</p> + </li> + </ul> + </li> + <li> + <p>Random Goal Maze</p> + + <ul> + <li> + <p>The goal is randomly drawn from a set of goals.</p> + </li> + <li> + <p>“Cross” (shaped) and “skull” (shaped) mazes are considered.</p> + </li> + <li> + <p>Even with velocity rewards and pretraining on low-level objectives (which can be thought of as exploration bonuses), the baseline fails to get close to the goal locations, while the proposed model reaches the goal most of the time.</p> + </li> + <li> + <p>The main results are reported using PPO, though repeating the experiments with A2C and DQN shows that the idea is fairly robust.</p> + </li> + <li> + <p>The paper reported that in their experiments, finetuning the lower-level primitives did not help much, though that might not be the case for other environments.</p> + </li> + </ul> + </li> + </ul> + </li> +</ul> + + + + + Efficient Lifelong Learning with A-GEM + + 2019-01-08T00:00:00-05:00 + /site/2019/01/08/Efficient Lifelong Learning with A-GEM + <h2 id="contributions">Contributions</h2> + +<ul> + <li> + <p>A new (and more realistic) evaluation protocol for lifelong learning where each data point is observed just once and a disjoint set of tasks are used for training and validation.</p> + </li> + <li> + <p>A new metric that focuses on the efficiency of the models - in terms of sample complexity and computational (and memory) costs.</p> + </li> + <li> + <p>A modification of <a href="https://arxiv.org/abs/1706.08840">Gradient Episodic Memory ie GEM</a> which reduces the computational overhead of GEM without compromising on the results.</p> + </li> + <li> + <p>Empirical validation that using task descriptors helps lifelong learning models and improves their few-shot learning capabilities.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1812.00420">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/facebookresearch/agem/">Link to the code</a></p> + </li> +</ul> + +<h2 id="learning-protocol">Learning Protocol</h2> + +<ul> + <li> + <p>Two groups of datasets - one for training and evaluation (D<sup>EV</sup>) and the other for cross-validation (D<sup>CV</sup>).</p> + </li> + <li> + <p>Data can be sampled multiple times from the cross-validation dataset but only once from the training dataset.</p> + </li> + <li> + <p>Each group of datasets (say D<sup>EV</sup> or D<sup>CV</sup>) is a list of task-specific datasets D<sub>k</sub> (k is the task index).</p> + </li> + <li> + <p>Each sample in D<sub>k</sub> is of the form (x, t, y) where x is the data, t is the task descriptor and y is the output.</p> + </li> + <li> + <p>D<sub>k</sub> contains B<sup>k</sup> minibatches of data.</p> + </li> +</ul> + +<h2 id="metrics">Metrics</h2> + +<h3 id="accuracy">Accuracy</h3> + +<ul> + <li> + <p>a<sub>k,i,j</sub> = accuracy on test task j after training on the ith minibatch of training task k.</p> + </li> + <li> + <p>A<sub>k</sub> = mean over all j = 1 to k (a<sub>k, B<sub>k</sub>, j</sub>) ie train the model on data for task k and then test it on all the tasks.</p> + </li> +</ul> + +<h3 id="forgetting-measure">Forgetting Measure</h3> + +<ul> + <li> + <p>f<sub>j</sub><sup>k</sup> = forgetting on task j after training on all minibatches up to task k.</p> + </li> + <li> + <p>f<sub>j</sub><sup>k</sup> = max over all l = 1 to k-1 (a<sub>l, B<sub>l</sub>, j</sub> - a<sub>k, B<sub>k</sub>, j</sub>)</p> + </li> + <li> + <p>Forgetting = F<sub>k</sub> = mean over all j = 1 to k-1 (f<sub>j</sub><sup>k</sup>)</p> + </li> +</ul> + +<h3 id="lca---learning-curve-area">LCA - Learning Curve Area</h3> + +<ul> + <li> + <p>Z<sub>b</sub> = average b-shot performance where b is the minibatch number.</p> + </li> + <li> + <p>Z<sub>b</sub> = mean over all k = 0 to T (a<sub>k, b, k</sub>)</p> + </li> + <li> + <p>LCA<sub>β</sub> = mean over all b = 0 to β (Z<sub>b</sub>)</p> + </li> + <li> + <p>One special case is LCA<sub>0</sub>, which is the forward transfer performance or performance on the unseen task.</p> + </li> + <li> + <p>In experiments, β is kept small as we want the model to learn from few examples.</p> + </li> +</ul> + +<h2 id="model">Model</h2> + +<ul> + <li> + <p>GEM has been shown to be very effective in the single-epoch setting but it introduces a very high computational overhead.</p> + </li> + <li> + <p>Average GEM (AGEM) reduces this overhead by sampling (and using) only some examples from the episodic memory instead of using all the examples: the proposed gradient is corrected against the average gradient computed on the sampled memory examples (see the sketch after this list).</p> + </li> + <li> + <p>While GEM provides better guarantees in terms of worst-case forgetting, AGEM provides better guarantees in terms of average accuracy.</p> + </li> +</ul>
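+<p>A minimal sketch of the A-GEM gradient correction (flattened parameter gradients are assumed):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch

def agem_project(grad, grad_ref):
    # grad: gradient on the current minibatch; grad_ref: average gradient
    # on examples sampled from the episodic memory.
    dot = torch.dot(grad, grad_ref)
    if dot &lt; 0:  # the update would increase the loss on past tasks
        grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
    return grad
</code></pre></div></div>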
<h2 id="joint-embedding-model-using-compositional-task-descriptors">Joint Embedding Model Using Compositional Task Descriptors</h2> + +<ul> + <li> + <p>Compositional task descriptors are used to speed up training on the subsequent tasks.</p> + </li> + <li> + <p>A matrix specifying the attribute values of the objects (to be recognized in the task) is used.</p> + </li> + <li> + <p>A joint embedding space between image features and attribute embeddings is learned.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<h3 id="datasets">Datasets</h3> + +<ul> + <li> + <p><a href="https://arxiv.org/abs/1612.00796">Permuted MNIST</a></p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1703.04200">Split CIFAR</a></p> + </li> + <li> + <p><a href="http://www.vision.caltech.edu/visipedia/CUB-200-2011.html">Split CUB</a></p> + </li> + <li> + <p><a href="http://cvml.ist.ac.at/papers/lampert-cvpr2009.pdf">Split AWA</a></p> + </li> +</ul> + +<h3 id="setup">Setup</h3> + +<ul> + <li> + <p>Integer task descriptors for MNIST and CIFAR, and class attributes as descriptors for CUB and AWA.</p> + </li> + <li> + <p>Baselines include <a href="https://arxiv.org/abs/1706.08840">GEM</a>, <a href="https://arxiv.org/abs/1611.07725">iCaRL</a>, <a href="https://arxiv.org/pdf/1612.00796.pdf">Elastic Weight Consolidation</a>, <a href="https://arxiv.org/abs/1606.04671">Progressive Neural Networks</a> etc.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>AGEM outperforms the other models on all the datasets except MNIST, where Progressive Neural Networks lead. One reason could be that MNIST has a large number of training examples per task. But Progressive Neural Networks lead to bad utilization of capacity.</p> + </li> + <li> + <p>While AGEM and GEM have similar performance, GEM has a much higher computational and memory overhead.</p> + </li> + <li> + <p>Use of task descriptors improves the accuracy for all the models.</p> + </li> + <li> + <p>It seems that AGEM offers a good tradeoff between average accuracy and efficiency - in terms of sample efficiency, memory requirements and computational costs.</p> + </li> +</ul> + + + + + Pre-training Graph Neural Networks with Kernels + + 2019-01-02T00:00:00-05:00 + /site/2019/01/02/Pre-training Graph Neural Networks with Kernels + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes a pretraining technique that can be used with the <a href="https://shagunsodhani.in/papers-I-read/Neural-Message-Passing-for-Quantum-Chemistry">GNN</a> architecture for learning graph representations as induced by powerful graph kernels.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1811.06930">Paper</a></p> + </li> +</ul> + +<h2 id="idea">Idea</h2> + +<ul> + <li> + <p>Graph kernel methods can learn powerful representations of the input graphs, but the learned representation is implicit as the kernel function actually computes the dot product between the representations.</p> + </li> + <li> + <p>GNNs are flexible and powerful in terms of the representations they can learn, but they can easily overfit if a large amount of training data is not available, as is commonly the case with graphs.</p> + </li> + <li> + <p>Kernel methods can be used to learn an unsupervised graph representation that can be finetuned using the GNN architectures for the supervised tasks.</p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>Given a dataset of graphs <em>g<sub>1</sub>, g<sub>2</sub>, …, g<sub>n</sub></em>, use a relevant kernel function to compute <em>k(g<sub>i</sub>, g<sub>j</sub>)</em> for all pairs of graphs.</p> + </li> + <li> + <p>A siamese network is used to encode the pair of graphs into representations <em>f(g<sub>i</sub>)</em> and <em>f(g<sub>j</sub>)</em> such that <em>dot(f(g<sub>i</sub>), f(g<sub>j</sub>))</em> equals <em>k(g<sub>i</sub>, g<sub>j</sub>)</em> (see the sketch after this list).</p> + </li> + <li> + <p>The function <em>f</em> is trained to learn a compressed representation of the kernel’s feature space.</p> + </li> +</ul>
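+<p>A minimal sketch of this pretraining objective (the graph encoder and batching are assumed given; regressing dot products onto precomputed kernel values is the essential part):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch

def kernel_pretrain_loss(encoder, g_i, g_j, k_ij):
    # Siamese setup: the same encoder (shared weights) embeds both graphs.
    f_i, f_j = encoder(g_i), encoder(g_j)
    pred = (f_i * f_j).sum(-1)            # batched dot products
    return ((pred - k_ij) ** 2).mean()    # match precomputed kernel values
</code></pre></div></div>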
<h2 id="experiments">Experiments</h2> + +<h3 id="datasets">Datasets</h3> + +<ul> + <li>Biological node-labeled graphs representing chemical compounds - MUTAG, PTC, NCI1</li> +</ul> + +<h3 id="baselines">Baselines</h3> + +<ul> + <li><a href="https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf">DGCNN</a></li> + <li>Graphlet Kernel (GK)</li> + <li>Random Walk Kernel</li> + <li>Propagation Kernel</li> + <li>Weisfeiler-Lehman subtree kernel (WL)</li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li> + <p>Pretraining uses the WL kernel.</p> + </li> + <li> + <p>The pretrained model performs better than the baselines for 2 datasets but lags behind the WL method (which was used for pretraining) for the NCI1 dataset.</p> + </li> +</ul> + +<h2 id="notes">Notes</h2> + +<ul> + <li>The idea is straightforward and intuitive. In general, this kind of pretraining should help the downstream model. It would be interesting to try it on more datasets/kernels/GNNs so that more conclusive results can be obtained.</li> +</ul> + + + + + Smooth Loss Functions for Deep Top-k Classification + + 2018-12-25T00:00:00-05:00 + /site/2018/12/25/Smooth Loss Functions for Deep Top-k Classification + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>For top-k classification tasks, cross entropy is widely used as the learning objective, even though it is the optimal metric only in the limit of infinite data.</p> + </li> + <li> + <p>The paper introduces a family of smoothed loss functions that are specially designed for top-k optimization.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1802.07595">Paper</a></p> + </li> + <li> + <p><a href="https://github.com/oval-group/smooth-topk">Code</a></p> + </li> +</ul> + +<h2 id="idea">Idea</h2> + +<ul> + <li>Inspired by multi-class SVMs, a surrogate loss (l<sub>k</sub>) is introduced that creates a margin between the ground truth and the kth largest score.</li> +</ul> + +<p><img src="https://github.com/shagunsodhani/papers-I-read/raw/master/assets/topk/eq1.png" alt="Equation 1" /></p> + +<ul> + <li> + <p>Here <strong>s</strong> denotes the output of the classifier model to be learnt, <em>y</em> is the ground truth label, <em>s[k]</em> denotes the kth largest element of <strong>s</strong> and <strong>s\p</strong> denotes the vector <strong>s</strong> without the <em>p</em>th element.</p> + </li> + <li> + <p>This l<sub>k</sub> loss has two limitations:</p> + + <ul> + <li> + <p>It is continuous but not differentiable in <em>s</em>.</p> + </li> + <li> + <p>Its weak derivatives have at most 2 non-zero elements.</p> + </li> + </ul> + </li> + <li> + <p>The loss can be reformulated by adding and subtracting the k-1 largest scores of <strong>s\y</strong> and <em>s<sub>y</sub></em> and by introducing a temperature parameter τ.</p> + </li> +</ul> + +<p><img src="https://github.com/shagunsodhani/papers-I-read/raw/master/assets/topk/eq2.png" alt="Equation 2" /></p> + +<h2 id="properties-of-lkτ">Properties of L<sub>kτ</sub></h2> + +<ul> + <li> + <p>For any τ &gt; 0, L<sub>kτ</sub> is infinitely differentiable and has non-sparse gradients.</p> + </li> + <li> + <p>Under mild conditions, L<sub>kτ</sub> approaches l<sub>k</sub> (in a pointwise sense) as τ approaches 0<sup>+</sup>.</p> + </li> + <li> + <p>It is an upper bound on the actual loss (up to a constant factor).</p> + </li> + <li> + <p>It is a generalization of the cross-entropy loss for different values of k and τ, and higher margins.</p> + </li> +</ul> + +<h2 id="computational-challenges">Computational Challenges</h2> + +<ul> + <li> + <p><em>nCk</em> terms need to be evaluated to compute the loss for one sample (n is the number of classes).</p> + </li> + <li> + <p>The loss L<sub>kτ</sub> can be expressed in terms of elementary symmetric polynomials σ<sub>i</sub>(<strong>e</strong>) (the sum of all products of i distinct elements of vector <strong>e</strong>). Thus the challenge is to compute σ<sub>k</sub> efficiently.</p> + </li> +</ul> + +<h3 id="forward-computation">Forward Computation</h3> + +<ul> + <li> + <p>Compute σ<sub>k</sub>(<strong>e</strong>) where <strong>e</strong> is an n-dimensional vector, k &lt;&lt; n, and e[i] != 0 for all i.</p> + </li> + <li> + <p>σ<sub>i</sub>(<strong>e</strong>) can be computed using the coefficients of the polynomial (X+e<sub>1</sub>)(X+e<sub>2</sub>)…(X+e<sub>n</sub>) by a divide-and-conquer approach with polynomial multiplication (see the sketch after this list).</p> + </li> + <li> + <p>With some more optimizations (eg log(n) levels of recursion and each level being parallelized on a GPU), the resulting algorithm scales well with n on a GPU.</p> + </li> + <li> + <p>Operations are performed in log-space using the log-sum-exp trick to achieve numerical stability in single floating point precision.</p> + </li> +</ul>
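+<p>A small sketch of the divide-and-conquer idea (in plain numpy, without the log-space and GPU-parallelization tricks the paper uses):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def elementary_symmetric(e):
    # Coefficients of prod_j (X + e_j); the coefficient of X^(n-i)
    # is exactly sigma_i(e).
    if len(e) == 1:
        return np.array([1.0, e[0]])          # the polynomial X + e_1
    mid = len(e) // 2
    return np.polymul(elementary_symmetric(e[:mid]),
                      elementary_symmetric(e[mid:]))

print(elementary_symmetric(np.array([1.0, 2.0, 3.0])))
# [1. 6. 11. 6.] -&gt; sigma_1 = 6, sigma_2 = 11, sigma_3 = 6
</code></pre></div></div>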
<h3 id="backward-computation">Backward computation</h3> + +<ul> + <li> + <p>The backward pass uses optimizations like computing the derivative of σ<sub>j</sub> with respect to e<sub>i</sub> in a recursive manner.</p> + </li> + <li> + <p>The appendix of the paper describes these techniques in detail.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Experiments are performed on CIFAR-100 (with noise) and Imagenet.</p> + </li> + <li> + <p>For CIFAR-100 with noise, the labels are randomized with probability p (within the same top-level class).</p> + </li> + <li> + <p>The proposed loss function is very robust to both noise and reduction in the amount of training data as compared to the cross-entropy loss, for both top-k and top-1 performance.</p> + </li> +</ul> + + + + + Hindsight Experience Replay + + 2018-12-18T00:00:00-05:00 + /site/2018/12/18/Hindsight Experience Replay + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Hindsight Experience Replay (HER) is a sample-efficient technique to learn from sparse rewards.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1707.01495">Link to the paper</a></p> + </li> +</ul> + +<h2 id="idea">Idea</h2> + +<ul> + <li> + <p>Assume a footballer narrowly misses the goal. Even though the player does not get any “reward” (in terms of a goal), the player realizes that had the goal post been shifted a bit, it would have resulted in a goal (reward).</p> + </li> + <li> + <p>The same intuition is applied for the RL agent - let us say that the true goal state was <em>g</em> while the agent ends up in the state <em>s</em>.</p> + </li> + <li> + <p>While the action sequence is not useful for reaching the goal state <em>g</em>, it is indeed useful for reaching state <em>s</em>. Hence the trajectory could be replayed with the goal as <em>s</em> (and not <em>g</em>).</p> + </li> +</ul> + +<h2 id="technical-details">Technical Details</h2> + +<ul> + <li> + <p>A multi-goal policy is trained using Universal Value Function Approximators (UVFA).</p> + </li> + <li> + <p>Every episode starts by sampling a start state and a goal state. Each goal has a different reward function.</p> + </li> + <li> + <p>The policy uses both the current state and the current goal state, and leads to a state transition sequence <em>s<sub>1</sub>, s<sub>2</sub>,…, s<sub>n</sub></em>.</p> + </li> + <li> + <p>Each of these transitions <em>s<sub>i</sub> -&gt; s<sub>i+1</sub></em> is stored in a buffer with both the original goal and a subset of the other goals (see the sketch after this list).</p> + </li> + <li> + <p>For the goal selection, the following strategies are tried:</p> + + <ul> + <li> + <p><em>Future</em> - <em>k</em> random states that occur in the same episode after the transition.</p> + </li> + <li> + <p><em>Final</em> - the goal state is the final state of the current episode.</p> + </li> + <li> + <p><em>Episode</em> - <em>k</em> random states are selected from the current episode.</p> + </li> + <li> + <p><em>Random</em> - <em>k</em> states are selected randomly.</p> + </li> + </ul> + </li> + <li> + <p>Any off-policy algorithm can be used. Specifically, DDPG is used.</p> + </li> +</ul>
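+<p>A minimal sketch of the relabeling step for the <em>future</em> strategy (the episode is assumed to be a list of (state, action, next_state, goal) tuples, and reward_fn recomputes the sparse reward for a substituted goal):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import random

def her_relabel(episode, reward_fn, k=4):
    buffer = []
    for t, (s, a, s_next, g) in enumerate(episode):
        # Store the transition with the original goal...
        buffer.append((s, a, s_next, g, reward_fn(s_next, g)))
        # ...and with k goals taken from states achieved later in the episode.
        for _ in range(k):
            future = random.randint(t, len(episode) - 1)
            g_new = episode[future][2]       # a future achieved state
            buffer.append((s, a, s_next, g_new, reward_fn(s_next, g_new)))
    return buffer
</code></pre></div></div>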
<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>A robotic arm, simulated using MuJoCo, for the <em>push</em>, <em>slide</em> and <em>pick and place</em> tasks.</p> + </li> + <li> + <p>DDPG with and without HER is evaluated on the 3 tasks.</p> + </li> + <li> + <p>DDPG with the HER variant significantly outperforms the baseline in all the cases.</p> + </li> +</ul> + + + + + Representation Tradeoffs for Hyperbolic Embeddings + + 2018-12-11T00:00:00-05:00 + /site/2018/12/11/Representation Tradeoffs for Hyperbolic Embeddings + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper describes a combinatorial approach to embed trees into hyperbolic spaces without performing optimization.</p> + </li> + <li> + <p>The resulting mechanism is analyzed to obtain dimensionality-precision tradeoffs.</p> + </li> + <li> + <p>To embed general metric spaces in hyperbolic spaces, a hyperbolic generalization of multidimensional scaling (h-MDS) is proposed.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1804.03329">Link to the paper</a></p> + </li> +</ul> + +<h2 id="preliminaries">Preliminaries</h2> + +<ul> + <li> + <p>Hyperbolic Spaces</p> + + <ul> + <li> + <p>Have the “tree-like” property ie the shortest path between a pair of points is almost the same as the path through the origin.</p> + </li> + <li> + <p>Generally, the Poincare ball model is used, given advantages like conformality (angles are preserved relative to the Euclidean space).</p> + </li> + </ul> + </li> + <li> + <p>Fidelity Measures</p> + + <ul> + <li> + <p>Mean Average Precision - MAP</p> + + <ul> + <li>A local metric based on the ranking of the distances to the immediate neighbors.</li> + </ul> + </li> + <li> + <p>Distortion</p> + + <ul> + <li>A global metric that depends on the underlying distances and not just the local relationship between distances.</li> + </ul> + </li> + </ul> + </li> +</ul> + +<h2 id="combinatorial-construction-for-embedding-hierarchies-into-hyperbolic-spaces">Combinatorial Construction for embedding hierarchies into Hyperbolic spaces</h2> + +<ul> + <li> + <p>Embed the given graph <em>G = (V, E)</em> into a tree <em>T</em>.</p> + </li> + <li> + <p>Embed the tree <em>T</em> into the Poincare ball <em>H<sub>d</sub></em> of dimensionality <em>d</em>.</p> + </li> +</ul> + +<h3 id="sarkars-construction-to-embed-points-in-a-2-d-poincare-ball">Sarkar’s construction to embed points in a 2-d Poincare ball</h3> + +<ul> + <li> + <p>Consider two points <em>a</em> and <em>b</em> (from the tree) where <em>b</em> is the parent of <em>a</em>.</p> + </li> + <li> + <p>Assume that <em>a</em> is embedded as <em>f(a)</em> and <em>b</em> is embedded as <em>f(b)</em>, and the children of <em>a</em> need to be embedded.</p> + </li> + <li> + <p>Reflect <em>f(a)</em> and <em>f(b)</em> across a geodesic such that <em>f(a)</em> is mapped to 0 (the origin) while <em>f(b)</em> is mapped to some new point <em>z</em>.</p> + </li> + <li> + <p>Children of <em>a</em> are placed at points <em>y<sub>i</sub></em> which are equally spaced around a circle of radius <em>(e<sup>r</sup> - 1) / (e<sup>r</sup> + 1)</em> and maximally separated from <em>z</em>, where <em>r</em> is the scaling factor.</p> + </li> + <li> + <p>Then all the points are reflected back across the geodesic so that all children are at a distance <em>r</em> from <em>f(a)</em>.</p> + </li> + <li> + <p>To embed the tree itself, place the root node at the origin, place its children around it in a circle, then place their children, and so on.</p> + </li> + <li> + <p>In this construction, precision scales logarithmically with the degree of the tree but linearly with the maximum path length.</p> + </li> +</ul> + +<h3 id="d-dimensional-hyperbolic-spaces"><em>d</em>-dimensional hyperbolic spaces</h3> + +<ul> + <li> + <p>In the <em>d</em>-dimensional space, the points are embedded into hyperspheres (instead of circles).</p> + </li> + <li> + <p>The number of child nodes that can be placed for a particular angle grows with the dimension.</p> + </li> + <li> + <p>Increasing the dimension helps with bushy trees (with high node degree).</p> + </li> +</ul> + +<h2 id="hyperbolic-multidimensional-scaling-h-mds">Hyperbolic multidimensional scaling (h-MDS)</h2> + +<ul> + <li> + <p>Given the pairwise distances of a set of points in the hyperbolic space, how can the points be recovered?</p> + </li> + <li> + <p>The corresponding problem in the Euclidean space is solved using MDS.</p> + </li> + <li> + <p>A variant of MDS, called h-MDS, is proposed.</p> + </li> + <li> + <p>MDS makes a centering assumption that the points have 0 mean. In h-MDS, a new mean (called the pseudo-Euclidean mean) is introduced to enable recovery via matrix factorization.</p> + </li> + <li> + <p>Instead of the Poincare model, the hyperboloid model is used (though the points can be mapped back and forth).</p> + </li> +</ul> + +<h3 id="pseudo-euclidean-mean">Pseudo-Euclidean Mean</h3> + +<ul> + <li>A set of points can always be centered without affecting their pairwise distances by simply finding their mean and sending it to 0 via an isometry.</li> +</ul> + +<h3 id="recovery-via-matrix-factorization">Recovery via matrix factorization</h3> + +<ul> + <li> + <p>Given the pairwise distances, a new matrix <em>Y</em> is constructed by applying <em>cosh</em> element-wise to the pairwise distances.</p> + </li> + <li> + <p>Running PCA on <em>-Y</em> recovers X up to rotation (see the sketch after this list).</p> + </li> +</ul>
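+<p>A simplified sketch of the recovery step (centering and the map back to the Poincare model are omitted, so this is only the matrix-factorization core, not the full h-MDS algorithm):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def h_mds_core(D, d):
    # D: matrix of pairwise hyperbolic distances; d: target dimension.
    Y = np.cosh(D)
    vals, vecs = np.linalg.eigh(-Y)           # PCA on -Y
    idx = np.argsort(vals)[::-1][:d]          # keep the top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
</code></pre></div></div>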
<h2 id="dimensionality-reduction-with-pga-principal-geodesic-analysis">Dimensionality Reduction with PGA (Principal Geodesic Analysis)</h2> + +<ul> + <li> + <p>PGA is the counterpart of PCA in hyperbolic spaces.</p> + </li> + <li> + <p>First, the <em>Karcher</em> mean of the given points is computed.</p> + </li> + <li> + <p>All points <em>x<sub>i</sub></em> are reflected so that their mean is 0 in the Poincare disk model.</p> + </li> + <li> + <p>Combining that with the Euclidean reflection formula and hyperbolic metrics leads to a non-convex loss function which can be optimized using a gradient descent algorithm.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Datasets</p> + + <ul> + <li>Trees: fully balanced trees and phylogenetic trees expressing genetic heritage.</li> + <li>Tree-like hierarchies: WordNet hypernyms and a graph of Ph.D. advisor-advisee relationships.</li> + <li>Non-tree-like graphs: disease relationships, protein interactions, etc.</li> + </ul> + </li> + <li> + <p>Results</p> + + <ul> + <li>The combinatorial construction outperforms approaches based on optimization in terms of both MAP and distortion.</li> + <li>Eg on WordNet, the combinatorial approach achieves a MAP of 0.989 with just 2 dimensions while the previous best was 0.87 with 200 dimensions.</li> + </ul> + </li> +</ul> + + + + + + Learned Optimizers that Scale and Generalize + + 2018-11-01T00:00:00-04:00 + /site/2018/11/01/Learned Optimizers that Scale and Generalize + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper introduces a learned gradient descent optimizer that has low memory and computational overhead and that generalizes well to new tasks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1703.04813">Link to the paper</a></p> + </li> +</ul> + +<h2 id="key-advantage">Key Advantage</h2> + +<ul> + <li> + <p>Uses a hierarchical RNN architecture, augmented by features like adaptive input and output scaling, momentum, etc.</p> + </li> + <li> + <p>A meta-training set of small, diverse optimization tasks with diverse loss landscapes is developed. The learnt optimizer generalizes to much more complex tasks and setups.</p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>A hierarchical RNN is designed to act as a learned optimizer. This RNN is the meta-learner and its parameters are shared across different tasks.</p> + </li> + <li> + <p>The learned optimizer takes as input the gradient (and related metadata) for each parameter and outputs the update to the parameters.</p> + </li> + <li> + <p>At the lowest level of the hierarchy, a small “parameter RNN” ingests the gradient (and related metadata).</p> + </li> + <li> + <p>One level up, an intermediate “Tensor RNN” incorporates information from a subset of Parameter RNNs (eg one Tensor RNN per layer of a feedforward network).</p> + </li> + <li> + <p>At the highest level is the global RNN, which receives input from all the Tensor RNNs and can keep track of weight updates across the task.</p> + </li> + <li> + <p>The inputs of each RNN are averaged and fed as input to the RNN one level up, and the output of each RNN is fed as a bias to the RNN one level down.</p> + </li> + <li> + <p>In practice, the hidden-state sizes are fixed at 10, 30 and 20 respectively.</p> + </li> +</ul> + +<h2 id="features-inspired-from-existing-optimizers">Features inspired from existing optimizers</h2> + +<ul> + <li> + <p>Attention and Nesterov’s momentum</p> + + <ul> + <li> + <p>An attention mechanism is incorporated by attending to new regions of the loss surface (which are an offset from the previous parameter location).</p> + </li> + <li> + <p>To incorporate momentum on multiple timescales, the exponential moving averages of the gradient at several timescales are also provided as input.</p> + </li> + <li> + <p>The average gradients are rescaled (as in RMSProp and Adam).</p> + </li> + <li> + <p>Relative log gradient magnitudes are also provided as input so that the optimizer can assess how the gradient magnitude changes with time.</p> + </li> + </ul> + </li> +</ul> + + + + + One-shot Learning with Memory-Augmented Neural Networks + + 2018-10-25T00:00:00-04:00 + /site/2018/10/25/One-shot Learning with Memory-Augmented Neural Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper demonstrates that Memory Augmented Neural Networks (MANN) are suitable for one-shot learning by introducing a new method for accessing an external memory.</p> + </li> + <li> + <p>This method focuses on memory content, while earlier methods additionally used memory location based focusing mechanisms.</p> + </li> + <li> + <p>Here, MANN refers to neural networks that have an external memory. This includes Neural Turing Machines (NTMs) and excludes LSTMs.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1605.06065">Link to the paper</a></p> + </li> +</ul> + +<h2 id="meta-learning">Meta-Learning</h2> + +<ul> + <li> + <p>In meta-learning, a learner is learning at two levels.</p> + </li> + <li> + <p>The learner is shown a sequence of tasks D<sub>1</sub>, D<sub>2</sub>, …, D<sub>T</sub>.</p> + </li> + <li> + <p>When it is training on one of the datasets (say D<sub>T</sub>), it learns to solve the current dataset.</p> + </li> + <li> + <p>At the same time, the learner tries to incorporate knowledge about how the task structure changes across different datasets (the second level of learning).</p> + </li> +</ul> + +<h2 id="mann--meta-learning">MANN + Meta Learning</h2> + +<ul> + <li> + <p>Following are the desirable characteristics for a scalable, combined architecture:</p> + + <ul> + <li> + <p>The memory representation should be both stable and element-wise accessible.</p> + </li> + <li> + <p>The number of model parameters should not be tied to the size of the memory.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="task-setup">Task Setup</h2> + +<ul> + <li> + <p>In standard learning, the goal is to reduce the error on some dataset D. In meta-learning, the goal is to reduce the error across a distribution of datasets p(D).</p> + </li> + <li> + <p>Each dataset is presented to the model in the form (x<sub>1</sub>, null), (x<sub>2</sub>, y<sub>1</sub>), …, (x<sub>t+1</sub>, y<sub>t</sub>) where y<sub>t</sub> is the correct label (or value) corresponding to the input x<sub>t</sub>.</p> + </li> + <li> + <p>Further, the data labels are shuffled from dataset to dataset.</p> + </li> + <li> + <p>The model must learn to hold the data samples in memory till the appropriate candidate labels are presented in the next step.</p> + </li> + <li> + <p>The idea is that a model that meta-learns would learn to map data representations to correct labels regardless of the actual content of the data representation or the label.</p> + </li> + <li> + <p>The paper uses an NTM as the MANN, with one modification.</p> + </li> + <li> + <p>In the original formulation, the memories were addressed by both content and location. Location-based addressing is not optimal for the current setup, where information encoding is not independent of the sequence.</p> + </li> + <li> + <p>A new access module - LRUA - Least Recently Used Access - is used to write to memory.</p> + </li> + <li> + <p>LRUA is purely content-based and writes to either the least used memory location (to preserve recent information) or the most recently used memory location (to overwrite recent information with more relevant information). This is decided on the basis of interpolation between the previous read weights and weights scaled according to usage (see the sketch after this list).</p> + </li> +</ul>
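+<p>A small sketch of the LRUA write weights (single read head and 1-D weight vectors assumed; alpha is the learned interpolation scalar):</p> + +<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch

def lrua_write_weights(w_r_prev, w_u_prev, alpha, n_reads=1):
    gate = torch.sigmoid(alpha)
    # Least-used weights: 1 at the n_reads smallest usage entries, else 0.
    w_lu = torch.zeros_like(w_u_prev)
    w_lu[torch.topk(w_u_prev, n_reads, largest=False).indices] = 1.0
    # Interpolate: write to recently read slots (overwrite) or to the
    # least-used slots (preserve recently written memory).
    return gate * w_r_prev + (1.0 - gate) * w_lu
</code></pre></div></div>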
The heuristic agent can be used to estimate the sample efficiency.</p> + </li> +</ul> + +<h2 id="contribution">Contribution</h2> + +<ul> + <li> + <p>BabyAI research platform for grounded language learning with a simulated human-in-the-loop.</p> + </li> + <li> + <p>Baseline results for performance and sample efficiency for the different tasks.</p> + </li> +</ul> + +<h2 id="babyai-platform">BabyAI Platform</h2> + +<h3 id="environment">Environment</h3> + +<ul> + <li> + <p>MiniGrid - A partially observable 2D grid-world environment.</p> + </li> + <li> + <p>Entities - Agent, ball, box, door, keys</p> + </li> + <li> + <p>Actions - pick, drop or move objects, unlock doors etc.</p> + </li> +</ul> + +<h3 id="baby-language">Baby Language</h3> + +<ul> + <li> + <p>Synthetic Language (a proper subset of English) - Used to give instructions to the agent</p> + </li> + <li> + <p>Support for verifying if the task (and the subtasks) are completed or not</p> + </li> +</ul> + +<h3 id="levels">Levels</h3> + +<ul> + <li> + <p>A level is an instruction-following task.</p> + </li> + <li> + <p>Formally, a level is a distribution of missions - a combination of initial state of the environment and an instruction (in Baby Language)</p> + </li> + <li> + <p>Motivated by curriculum learning, the authors create a series of tasks (with increasing difficulty).</p> + </li> + <li> + <p>A subset of skills (competencies) is required for solving each task. The platform takes into account this constraint when creating a level.</p> + </li> +</ul> + +<h3 id="heuristic-expert">Heuristic Expert</h3> + +<ul> + <li> + <p>The platform supports a Heuristic expert that simulates the role of a human teacher and knows how to solve each task.</p> + </li> + <li> + <p>For any level, it can suggest actions or generate demonstrations (given the state of the environment).</p> + </li> +</ul> + +<h2 id="experiment">Experiment</h2> + +<ul> + <li> + <p>An imitation learning baseline is trained for each level.</p> + </li> + <li> + <p>Data requirement for each level and the benefits of curriculum learning and imitation learning are investigated (in terms of sample efficiency).</p> + </li> +</ul> + +<h2 id="model-architecture">Model Architecture</h2> + +<ul> + <li> + <p>GRU to encode the sentence, CNN to encode the input observation</p> + </li> + <li> + <p>FiLM layer to combine the two representations</p> + </li> + <li> + <p>LSTM to encode the per-timestep FiLM encoding (timesteps in the environment)</p> + </li> + <li> + <p>Two model variants are considered:</p> + + <ul> + <li> + <p>Large Model - Bidirectional GRU + attention + large hidden state</p> + </li> + <li> + <p>Small Model - Unidirectional GRU + No attention + small hidden state</p> + </li> + </ul> + </li> + <li> + <p>Heuristic expert used to generate trajectory and the models are trained by imitation learning (to be used as baselines)</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>The key takeaway is that the current deep learning approaches are extremely sample inefficient when learning a compositional language.</p> + </li> + <li> + <p>Data efficiency of RL methods is much worse than that of imitation learning methods showing that the current imitation learning and reinforcement learning methods scale and generalize poorly.</p> + </li> + <li> + <p>Curriculum-based pretraining and interactive learning was found to be useful in only some cases.</p> + </li> +</ul> + + + + + + Poincaré Embeddings for Learning Hierarchical Representations + + 2018-10-11T00:00:00-04:00 + 
/site/2018/10/11/Poincare Embeddings for Learning Hierarchical Representations + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Much of the work in representation learning uses Euclidean vector spaces to embed datapoints (like words, nodes, entities etc).</p> + </li> + <li> + <p>This approach is not effective when the data has a (latent) hierarchical structure.</p> + </li> + <li> + <p>The paper proposes to compute the embeddings in hyperbolic space so as to preserve both the similarity and the structure information.</p> + </li> + <li> + <p><a href="https://arxiv.org/pdf/1705.08039.pdf">Link to the paper</a></p> + </li> +</ul> + +<h2 id="hyperbolic-geometry">Hyperbolic Geometry</h2> + +<ul> + <li> + <p>Hyperbolic spaces are spaces with a constant negative curvature while Euclidean spaces have zero curvature.</p> + </li> + <li> + <p>The hyperbolic disc area and circle length increase exponentially with the radius r, while in Euclidean space they increase only quadratically and linearly, respectively.</p> + </li> + <li> + <p>This makes hyperbolic space more suitable for embedding tree-like structures where the number of nodes increases as we move away from the root.</p> + </li> + <li> + <p>Hyperbolic spaces can be thought of as the continuous version of trees, and trees can be thought of as the discrete version of hyperbolic spaces.</p> + </li> +</ul> + +<h2 id="poincare-embeddings">Poincare Embeddings</h2> + +<ul> + <li> + <p>The Poincare ball model is one of several possible models of hyperbolic space and is considered here as it is more amenable to gradient-based optimisation.</p> + </li> + <li> + <p>The distance between 2 points changes smoothly and is symmetric. The hierarchical organisation is thus reflected only through the distance from the origin, which makes the model applicable in settings where the hierarchical structure needs to be inferred from the data.</p> + </li> + <li> + <p>Eventually, the norm of a point represents its place in the hierarchy while the distance between points represents similarity.</p> + </li> +</ul> + +<h2 id="optimization">Optimization</h2> + +<ul> + <li>RSGD (Riemannian SGD) method is used (see the sketch below).</li> + <li>Riemannian gradients can be computed from the Euclidean gradients by rescaling with the inverse of the Poincare ball metric tensor.</li> + <li>The embeddings are constrained to stay within the Poincare ball by a projection operation that rescales any embedding whose norm reaches 1 back to just inside the unit ball.</li> +</ul> + +<h2 id="training-details">Training Details</h2> + +<ul> + <li>Initializing the embeddings close to 0 (by sampling uniformly from (-0.001, 0.001)) helps.</li> + <li>The model is trained for an initial burn-in period of 10 epochs with 0.1 times the learning rate so as to find a better initial angular layout.</li> +</ul>
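+
+<p>For concreteness, here is a small NumPy sketch of the Poincaré distance and the Riemannian gradient rescaling described in the optimization section (an illustration, not the paper's full RSGD with projection):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+def poincare_distance(u, v):
+    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
+    num = 2.0 * np.sum((u - v) ** 2)
+    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
+    return np.arccosh(1.0 + num / den)
+
+def riemannian_grad(theta, euclidean_grad):
+    # Rescale by the inverse of the Poincare ball metric tensor.
+    return ((1.0 - np.sum(theta ** 2)) ** 2 / 4.0) * euclidean_grad
+
+print(poincare_distance(np.array([0.1, 0.0]), np.array([0.0, 0.9])))
+</code></pre></div></div>
+
+<h2 id="evaluation">Evaluation</h2> + +<ul> + <li> + <p>Embedding taxonomy for the WordNet task</p> + + <ul> + <li> + <p>Setup</p> + + <ul> + <li>Reconstruction</li> + <li>Link Prediction</li> + </ul> + </li> + <li> + <p>The input data is a collection of word pairs (u, v) which are related to each other.</p> + </li> + <li> + <p>For each word pair, 10 negative samples of the form (u, v’) are sampled and the training procedure uses a soft ranking loss that aims to bring the related objects closer together.</p> + </li> + </ul> + </li> + <li> + <p>Network Embedding</p> + + <ul> + <li> + <p>Baselines</p> + + <ul> + <li>Euclidean Embeddings</li> + <li>Translational Embedding where a relation vector corresponding to the edge type is also learnt.</li> + </ul> + </li> + <li> + 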
<p>Datasets</p> + + <ul> + <li>ASTROPH</li> + <li>CONDMAT</li> + <li>GRQC</li> + <li>HEPPH</li> + </ul> + </li> + </ul> + </li> + <li> + <p>Lexical Entailment</p> + + <ul> + <li>Hyperlex - A gold standard to evaluate how well the semantic models capture lexical entailment on a scale of [0, 10].</li> + <li>The key takeaway is that for all the datasets/setups, hyperbolic embeddings give a performance benefit when the embedding dimension is small.</li> + </ul> + </li> +</ul> + +<h2 id="challenges">Challenges</h2> + +<ul> + <li> + <p>Hyperbolic embeddings are not suitable for all datasets, eg if the dataset is not tree-like or has cycles.</p> + </li> + <li> + <p>Hyperbolic embeddings are difficult to optimize as each operation needs to be modified to be usable in the hyperbolic space.</p> + </li> +</ul> + + + + + When Recurrent Models Don’t Need To Be Recurrent + + 2018-10-04T00:00:00-04:00 + /site/2018/10/04/When Recurrent Models Don’t Need To Be Recurrent + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper explores “if a well behaved RNN can be replaced by a feed-forward network of comparable size without loss in performance.”</p> + </li> + <li> + <p>“Well behaved” is defined in terms of the control-theoretic notion of stability. This roughly requires that the gradients do not explode over time.</p> + </li> + <li> + <p>The paper shows that under the stability assumption, feedforward networks can approximate RNNs for both training and inference. The results are empirically validated as well.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1805.10369">Link to the paper</a></p> + </li> +</ul> + +<h2 id="problem-setting">Problem Setting</h2> + +<ul> + <li> + <p>Consider a general, non-linear dynamical system given by a differentiable state transition map Φ<sub>w</sub>. The hidden state evolves as h<sub>t</sub> = Φ<sub>w</sub>(h<sub>t-1</sub>, x<sub>t</sub>).</p> + </li> + <li> + <p>Assumptions:</p> + + <ul> + <li>Φ is smooth in w and h.</li> + <li>h<sub>0</sub> = 0</li> + <li>Φ<sub>w</sub>(0, 0) = 0 (can be ensured by translation)</li> + </ul> + </li> + <li> + <p>Stable models are the ones where Φ is contractive, ie norm(Φ<sub>w</sub>(h, x) - Φ<sub>w</sub>(h’, x)) is less than Λ * norm(h - h’) for some Λ less than 1.</p> + </li> + <li> + <p>For example, in an RNN, stability would require that norm(W) is less than (L<sub>p</sub>)<sup>-1</sup> where L<sub>p</sub> is the Lipschitz constant of the point-wise non-linearity used.</p> + </li> + <li> + <p>The feedforward approximation uses a finite context (of length k) and is a truncated model (see the sketch after this list).</p> + </li> + <li> + <p>A non-parametric function f maps the output of the recurrent model to a prediction. If f is desired to be a parametric model, its parameters can be pushed into the recurrent model.</p> + </li> +</ul>
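+
+<p>A small NumPy sketch (an illustration, not the paper's code) of the truncation: the state of a stable RNN computed from the full history versus from only the last k inputs:</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+rng = np.random.default_rng(0)
+d, k, T = 8, 16, 200
+W = rng.normal(size=(d, d))
+W *= 0.9 / np.linalg.norm(W, 2)  # spectral norm below 1 makes the tanh-RNN contractive
+U = rng.normal(size=(d, d))
+
+def state(xs):
+    h = np.zeros(d)  # h_0 = 0, as assumed above
+    for x in xs:
+        h = np.tanh(W @ h + U @ x)
+    return h
+
+xs = rng.normal(size=(T, d))
+print(np.linalg.norm(state(xs) - state(xs[-k:])))  # small, and shrinks as k grows
+</code></pre></div></div>
+
+<h2 id="theoretical-results">Theoretical Results</h2> + +<ul> + <li> + <p>For a Λ-contractive system, it can be proved that for a large k (and additional Lipschitz assumptions) the difference in prediction between the recurrent and truncated model is negligible.</p> + </li> + <li> + <p>If the recurrent model and truncated feed-forward network are initialized at the same point and trained over the same input for N steps, then for an optimal k, the weights of the two models would be very close in the Euclidean space. 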
It can be shown that this small difference does not lead to large gradient differences during subsequent update steps.</p> + </li> + <li> + <p>This can be roughly interpreted as follows - if gradient descent can train a stable recurrent network, it can also train the corresponding feedforward model and vice-versa.</p> + </li> + <li> + <p>The stability condition is important: without it, the truncated models would be poor approximations (even for large values of k). Further, without stability, it is difficult to show that gradient descent converges to a stationary point.</p> + </li> +</ul> + + + + + HoME - a Household Multimodal Environment + + 2018-09-27T00:00:00-04:00 + /site/2018/09/27/HoME - a Household Multimodal Environment + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>An environment for learning using modalities like vision, audio, semantics, physics and interaction with objects and other agents.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1711.11017">Link to the paper</a></p> + </li> +</ul> + +<h2 id="motivation">Motivation</h2> + +<ul> + <li> + <p>Humans learn by interacting with their surroundings (environment).</p> + </li> + <li> + <p>Similarly, training an agent in an interactive multi-modal environment (virtual embodiment) could be useful for a learning agent.</p> + </li> +</ul> + +<h2 id="characteristics">Characteristics</h2> + +<ul> + <li> + <p>Open-source and OpenAI Gym compatible</p> + </li> + <li> + <p>Built on top of 45000 3D house layouts from the SUNCG dataset.</p> + </li> + <li> + <p>Provides both 3D visual and acoustic rendering.</p> + </li> + <li> + <p>Semantic image segmentation and language description of objects.</p> + </li> +</ul> + +<h2 id="components">Components</h2> + +<ul> + <li> + <p>Rendering Engine</p> + + <ul> + <li> + <p>Implemented using the Panda3D game engine.</p> + </li> + <li> + <p>Renders RGB+depth scenes based on textures, multi-source lighting and shadows.</p> + </li> + </ul> + </li> + <li> + <p>Acoustic Engine</p> + + <ul> + <li> + <p>Implemented using EVERT</p> + </li> + <li> + <p>Supports multiple microphones, sound sources, sound absorption based on material, atmospheric conditions etc.</p> + </li> + </ul> + </li> + <li> + <p>Semantics Engine</p> + + <ul> + <li>Provides a short textual description for each object, along with information like color, category, material, size, location etc.</li> + </ul> + </li> + <li> + <p>Physics Engine</p> + + <ul> + <li> + <p>Implemented using the Bullet3 engine</p> + </li> + <li> + <p>Supports physical interaction, external forces like gravity, and position and velocity information for multiple agents.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="potential-applications">Potential Applications</h2> + +<ul> + <li> + <p>Visual Question Answering</p> + </li> + <li> + <p>Conversational Agents</p> + </li> + <li> + <p>Training an agent to follow instructions</p> + </li> + <li> + <p>Multi-agent communication</p> + </li> +</ul> + + + + + Emergence of Grounded Compositional Language in Multi-Agent Populations + + 2018-09-12T00:00:00-04:00 + /site/2018/09/12/Emergence of Grounded Compositional Language in Multi-Agent Populations + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper provides a multi-agent learning environment and proposes a learning approach that facilitates the emergence of a basic compositional language.</p> + </li> + <li> + <p>The language is quite rudimentary and is essentially a sequence of abstract discrete symbols. 
But it does have a defined vocabulary and syntax.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1703.04908">Link to the paper</a></p> + </li> +</ul> + +<h2 id="setup">Setup</h2> + +<ul> + <li> + <p>Cooperative, partially observable Markov game (the multi-agent extension of an MDP).</p> + </li> + <li> + <p>All agents have identical action and observation spaces, use the same policy and receive a shared reward.</p> + </li> +</ul> + +<h3 id="grounded-communication-environment">Grounded Communication Environment</h3> + +<ul> + <li> + <p>Physically simulated 2-D environment in continuous space and discrete time with N agents and M landmarks.</p> + </li> + <li> + <p>The agents and the landmarks occupy some location and have some attributes (colour, shape).</p> + </li> + <li> + <p>Within the environment, the agents can <em>go to</em> a location, <em>look</em> at a location or <em>do nothing</em>. Additionally, they can utter communication symbols c (from a shared vocabulary C). The agents themselves learn to assign a meaning to the symbols.</p> + </li> + <li> + <p>Each agent has an internal goal (which could require interaction with other agents to complete) which the other agents cannot see.</p> + </li> + <li> + <p>The goal for agent <em>i</em> consists of an action to perform, a landmark location at which to perform the action, and another agent who should perform the action.</p> + </li> + <li> + <p>Since the agent is continuously emitting symbols, a memory module is provided and simple additive memory updates are used.</p> + </li> + <li> + <p>For interaction, the agents could use verbal utterances, non-verbal signals (gaze) or non-communicative strategies (pushing other agents).</p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>A model of all agent and environment state dynamics is created over time and the gradient of the return is computed.</p> + </li> + <li> + <p>The Gumbel-Softmax distribution is used to obtain the categorical word emission c (see the sketch after this section).</p> + </li> + <li> + <p>A multi-layer perceptron is used to model the policy, which returns the action, the communication symbol and the memory update for each agent.</p> + </li> + <li> + <p>Since the number of agents (and hence the number of communication streams etc) can vary across instantiations, an identical model is instantiated per agent and per communication stream.</p> + </li> + <li> + <p>The outputs of the individual processing modules are pooled into feature vectors corresponding to communication and physical observations. These pooled features and the goal vectors are fed to the final processing module from which the actions and categorical symbols are sampled.</p> + </li> + <li> + <p>In practice, using an additional task (each agent predicts the goal of another agent) encouraged more meaningful communication utterances.</p> + </li> +</ul>
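+
+<p>A minimal sketch of the Gumbel-Softmax sampling step mentioned above (an illustration; the logits and temperature are placeholders):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def gumbel_softmax_sample(logits, tau=1.0):
+    """Differentiable approximation to sampling a categorical symbol:
+    perturb the logits with Gumbel noise, then apply a tempered softmax."""
+    g = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
+    return F.softmax((logits + g) / tau, dim=-1)
+
+print(gumbel_softmax_sample(torch.tensor([1.0, 2.0, 0.5])))
+</code></pre></div></div>
+
+<h3 id="compositionality-and-vocabulary-size">Compositionality and Vocabulary Size</h3> + +<ul> + <li> + <p>The authors recommend using a large vocabulary with a soft penalty that discourages the use of too many words. This leads to the use of a large vocabulary in the intermediate stages of training, which converges to a small vocabulary.</p> + </li> + <li> + <p>Along the lines of rich-gets-richer dynamics, the communication symbols c are modelled as being generated by a Dirichlet process. 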
The resulting reward across all agents is the log-likelihood of all communication utterances having been generated by a Dirichlet process.</p> + </li> + <li> + <p>Since the agents can only communicate in discrete symbols and do not have a global positioning reference, they need to unambiguously communicate landmark references to other agents.</p> + </li> +</ul> + +<h2 id="case-i---agents-can-not-see-each-other">Case I - Agents cannot see each other</h2> + +<ul> + <li> + <p>Non-verbal communication is not possible.</p> + </li> + <li> + <p>When trained with just 2 agents, symbols are assigned for each landmark and action.</p> + </li> + <li> + <p>As the number of agents is increased, additional symbols are used to refer to agents.</p> + </li> + <li> + <p>If agents of the same colour are asked to perform conflicting tasks, they perform the average of the conflicting tasks. If distractor locations are added, the agents learn to ignore them.</p> + </li> +</ul> + +<h2 id="non-verbal-communication">Non-verbal communication</h2> + +<ul> + <li> + <p>Agents are allowed to observe other agents’ position, gaze etc.</p> + </li> + <li> + <p>Now a location can be pointed to using gaze.</p> + </li> + <li> + <p>If gaze is disabled, the agent could indicate the goal landmark by moving to it.</p> + </li> + <li> + <p>Basically, even when verbal communication is disabled, the agents can come up with strategies to complete the task.</p> + </li> +</ul> + + + + + A Semantic Loss Function for Deep Learning with Symbolic Knowledge + + 2018-08-21T00:00:00-04:00 + /site/2018/08/21/A Semantic Loss Function for Deep Learning with Symbolic Knowledge + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper proposes an approach for using symbolic knowledge in deep learning systems. Such knowledge is often expressed as boolean constraints on the output of the deep learning system, and directly incorporating these constraints breaks the differentiability of the system.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1711.11157">Link to the paper</a></p> + </li> +</ul> + +<h2 id="problem-setting">Problem Setting</h2> + +<ul> + <li> + <p>The model is given some input data to perform predictions, and symbolic knowledge is provided in the form of boolean constraints, like the exactly-one constraint for a one-hot output encoding.</p> + </li> + <li> + <p>Most approaches tend to encode the symbolic knowledge in the vector space embedding to keep the model pipeline differentiable. 
In this process, the precise meaning of the symbolic knowledge is often lost.</p> + </li> + <li> + <p>A differentiable “semantic loss” is derived which captures the meaning of the constraint while being independent of its syntax.</p> + </li> +</ul> + +<h2 id="terminology">Terminology</h2> + +<ul> + <li> + <p>A state <strong>x</strong> (state refers to an instantiation of the boolean variables) satisfies a sentence <em>a</em> if <em>a</em> evaluates to true when using the variables as specified by <strong>x</strong>.</p> + </li> + <li> + <p>A sentence <em>a</em> entails another sentence <em>b</em> if all states that satisfy <em>a</em> also satisfy <em>b</em>.</p> + </li> + <li> + <p>The raw output vector of the neural network is denoted as <em>p</em> where each value in <em>p</em> denotes the probability of an output.</p> + </li> + <li> + <p>Three different output constraints are studied:</p> + + <ul> + <li> + <p><em>Exactly-one constraint</em></p> + + <ul> + <li>Exactly one value in <em>p</em> should be true.</li> + <li>Can be expressed in boolean logic as follows: Let (x1, x2, …, xn) be the variables in <em>p</em>. Then (not xi or not xj) for all pairs of variables, and (x1 or x2 or … xn).</li> + </ul> + </li> + <li><em>Valid Simple Path Constraint</em> + <ul> + <li>A set of edges must form a valid path.</li> + </ul> + </li> + <li><em>Ordering Constraint</em> + <ul> + <li>Defining an ordering over the variables.</li> + </ul> + </li> + </ul> + </li> +</ul> + +<h2 id="semantic-loss">Semantic Loss</h2> + +<ul> + <li> + <p>The semantic loss <em>L<sup>s</sup>(a, p)</em> is a function of a propositional logic sentence <em>a</em> (the symbolic knowledge constraint) and <em>p</em> (the output of the neural network).</p> + </li> + <li> + <p><em>a</em> is defined over the variables (x1, …, xn) and <em>p</em> is interpreted as a vector of probabilities corresponding to these variables <em>xi</em>.</p> + </li> + <li> + <p>The semantic loss is directly proportional to the negative log likelihood of generating a state that satisfies the constraint when sampling values according to the distribution <em>p</em> (see the sketch after this list).</p> + </li> +</ul>
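+
+<p>For the exactly-one constraint, this definition has a simple closed form. A minimal sketch (an illustrative direct computation; the paper computes the loss via weighted model counting with circuit compilation):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def exactly_one_semantic_loss(p, eps=1e-12):
+    """-log P(exactly one variable is true) when each variable x_i is
+    sampled independently with probability p_i."""
+    log_not = torch.log1p(-p + eps)  # log(1 - p_i)
+    total = log_not.sum()
+    # sum_i p_i * prod_{j != i} (1 - p_j), one term per variable
+    wmc = torch.exp(total - log_not + torch.log(p + eps)).sum()
+    return -torch.log(wmc + eps)
+
+print(exactly_one_semantic_loss(torch.tensor([0.9, 0.05, 0.05])))  # small loss
+</code></pre></div></div>
+
+<h2 id="main-axioms-and-insights">Main Axioms and Insights</h2> + +<ul> + <li><strong>Monotonicity</strong> + <ul> + <li>If a sentence <em>a</em> entails another sentence <em>b</em>, then for any given <em>p</em>, <em>L<sup>s</sup>(a, p) &gt;= L<sup>s</sup>(b, p)</em>, ie adding more constraints cannot decrease the semantic loss.</li> + </ul> + </li> + <li><strong>Semantic Equivalence</strong> + <ul> + <li>If two sentences are logically equivalent, their semantic loss is the same.</li> + </ul> + </li> + <li><strong>Identity</strong> + <ul> + <li>For any given sentence <em>a</em>, its representation as a sentence is equivalent to its representation as a deterministic vector, ie writing the “one-hot” constraint as a boolean expression is equivalent to a one-hot vector.</li> + </ul> + </li> + <li><strong>Satisfaction</strong> + <ul> + <li>If <em>p</em> entails the sentence <em>a</em>, then <em>L<sup>s</sup>(a, p) = 0</em>.</li> + </ul> + </li> + <li><strong>Label-literal correspondence</strong> + <ul> + <li>When the constraint is defined in terms of a single variable, it can be interpreted as the supervised label.</li> + <li>Hence the semantic loss in the case of a single variable should be equivalent to the cross-entropy loss.</li> + </ul> + </li> + <li><strong>Truth</strong> + <ul> + <li>The semantic loss of a true sentence is 0</li> + </ul> + </li> + <li><strong>Non-negativity</strong> + <ul> + 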
<li>Semantic loss should always be non-negative.</li> + </ul> + </li> + <li> + <p>Probabilities of variables that are not part of the constraint do not affect the semantic loss.</p> + </li> + <li>It can be shown that the semantic loss function satisfies all these axioms (and the other axioms specified in the paper) and is the only function to do so, up to a multiplicative constant.</li> +</ul> + +<h2 id="experimental-evaluation">Experimental Evaluation</h2> + +<ul> + <li> + <p>Semantic loss is used in the semi-supervised setting for Permuted MNIST, Fashion MNIST and CIFAR-10.</p> + </li> + <li> + <p>The key takeaway is that using the semantic loss improves the performance of the state-of-the-art models for Fashion MNIST and CIFAR-10.</p> + </li> + <li> + <p>One downside is that the effectiveness of the semantic loss for this type of constraint strongly depends on the performance of the underlying model. Further, the semantic loss does not improve the performance in the fully supervised scenario.</p> + </li> + <li> + <p>Further experiments are performed to evaluate the performance of the semantic loss on complex constraints. Since these tasks aim to highlight the effect of using the semantic loss, only simple models (MLPs) are evaluated.</p> + </li> +</ul> + +<h2 id="tractability-of-semantic-loss">Tractability of Semantic Loss</h2> + +<ul> + <li> + <p>Computing the semantic loss is similar to the automated reasoning task called weighted model counting (WMC).</p> + </li> + <li> + <p>Circuit compilation techniques can be used to compute WMC while allowing backpropagation.</p> + </li> +</ul> + +<h2 id="notes">Notes</h2> + +<ul> + <li>The proposed idea is simple and intuitive, and the results on the semi-supervised classification task are quite good. It would be interesting to extend and scale this method to more complex constraints.</li> +</ul> + + + + + Hierarchical Graph Representation Learning with Differentiable Pooling + + 2018-08-16T00:00:00-04:00 + /site/2018/08/16/Hierarchical Graph Representation Learning with Differentiable Pooling + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Most existing GNN (Graph Neural Network) methods are inherently flat and are unable to process the information in a hierarchical manner.</p> + </li> + <li> + <p>The paper proposes a differentiable graph pooling operation, DIFFPOOL, that can generate hierarchical graph representations and can be easily plugged into many GNN architectures.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1806.08804">Link to the paper</a></p> + </li> +</ul> + +<h2 id="key-idea">Key Idea</h2> + +<ul> + <li> + <p>CNNs have a spatial pooling operation that allows deep CNN architectures to operate on coarsened representations of input images.</p> + </li> + <li> + <p>This notion cannot be applied as-is to graphs as they do not have a natural notion of spatial locality like images do.</p> + </li> + <li> + <p>DIFFPOOL attempts to resolve this problem by learning a differentiable soft assignment at each layer, which is equivalent to pooling clusters of nodes to obtain a coarser representation.</p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>Given a graph <em>G(A, F)</em>, where <em>A</em> is the adjacency matrix and <em>F</em> is the feature matrix.</p> + </li> + <li> + <p>Given a permutation invariant GNN that follows the message passing architecture. 
The output of this GNN can be expressed as <em>Z = GNN(A, X)</em> where <em>X</em> is the current feature matrix.</p> + </li> + <li> + <p>The goal is to stack <em>L</em> GNN layers on top of each other such that the <em>l<sup>th</sup></em> layer uses the coarsened output from the <em>(l-1)<sup>th</sup></em> layer.</p> + </li> + <li> + <p>This coarsening operation uses a cluster assignment matrix <em>S</em>.</p> + </li> + <li> + <p>The learned cluster assignment matrix at layer <em>l</em> is denoted as <em>S<sup>l</sup></em></p> + </li> + <li> + <p>Given <em>S<sup>l</sup></em>, the embedding matrix for the <em>(l+1)<sup>th</sup></em> layer is given as <em>transpose(S<sup>l</sup>)Z<sup>l</sup></em> and the adjacency matrix is given by <em>transpose(S<sup>l</sup>)A<sup>l</sup>S<sup>l</sup></em> (see the sketch after the model variants below)</p> + </li> + <li> + <p>A new GNN, called GNN<sub>pool</sub>, is used to produce the assignment matrix <em>S</em> by taking a softmax over <em>GNN<sub>pool</sub>(A<sup>l</sup>, X<sup>l</sup>)</em></p> + </li> + <li> + <p>As long as the GNN model is permutation invariant, the resulting DIFFPOOL model is also permutation invariant.</p> + </li> +</ul> + +<h2 id="auxiliary-losses">Auxiliary Losses</h2> + +<ul> + <li> + <p>The paper uses 2 auxiliary losses to push the model away from spurious local minima early in the training.</p> + </li> + <li> + <p>Link prediction objective - at each layer, the link prediction loss ( = norm(A - S transpose(S))) is minimized, with the intuition that nearby nodes should be pooled together.</p> + </li> + <li> + <p>Ideally, the cluster assignment for each node should be a one-hot vector, so the entropy of the cluster assignment per node is regularized.</p> + </li> +</ul> + +<h2 id="baselines">Baselines</h2> + +<ul> + <li>GNN based models + <ul> + <li>GraphSage + <ul> + <li>Mean pooling</li> + <li>Set2Set pooling</li> + <li>Sort pooling</li> + </ul> + </li> + <li>Structure2vec</li> + <li>Edge conditioned filters in CNN</li> + <li>PatchySan</li> + </ul> + </li> + <li>Kernel based models + <ul> + <li>Graphlet, shortest path etc</li> + </ul> + </li> +</ul> + +<h2 id="model-variants">Model Variants</h2> + +<ul> + <li>GraphSage + <ul> + <li>Mean pool + Diff pool (3 or 2 layers)</li> + </ul> + </li> + <li>Structure2Vec + Diffpool</li> + <li>Diffpool-Det + <ul> + <li>The assignment matrix <em>S</em> is generated using graph clustering algorithms.</li> + </ul> + </li> + <li>Diffpool-NoLP + <ul> + <li>The link prediction objective function is turned off.</li> + </ul> + </li> + <li>At each DiffPool layer, the number of clusters is set to 25% of the number of nodes before the DiffPool layer.</li> +</ul>
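+
+<p>A minimal sketch of the coarsening step described in the approach section (an illustration; in the paper, <em>Z</em> and the assignment logits come from two separate GNNs over the same inputs):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def diffpool(A, Z, S_logits):
+    """One DIFFPOOL layer: soft-assign n nodes to c clusters and coarsen.
+    A: (n, n) adjacency, Z: (n, d) node embeddings, S_logits: (n, c)."""
+    S = torch.softmax(S_logits, dim=-1)    # soft cluster assignment
+    X_next = S.t() @ Z                     # (c, d) coarsened features
+    A_next = S.t() @ A @ S                 # (c, c) coarsened adjacency
+    link_loss = torch.norm(A - S @ S.t())  # auxiliary link prediction loss
+    return A_next, X_next, link_loss
+</code></pre></div></div>
+
+<h2 id="results">Results</h2> + +<ul> + <li> + <p>DiffPool obtains the highest average performance across all the pooling approaches and improves upon the base GraphSage architecture by an average of around 7%.</p> + </li> + <li> + <p>In terms of runtime complexity, the paper reports that DiffPool does not incur any significant additional running time. 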
But given that there are now 2 GNN models per layer, the size of the model should increase.</p> + </li> + <li> + <p>DiffPool can capture hierarchical community structure even when trained on just the graph classification loss.</p> + </li> + <li> + <p>One advantage of DiffPool is that the nodes are pooled in a non-uniform way, so densely connected groups of nodes would collapse into one cluster while sparsely connected nodes can retain their identity.</p> + </li> +</ul> + + + + + Imagination-Augmented Agents for Deep Reinforcement Learning + + 2018-08-08T00:00:00-04:00 + /site/2018/08/08/Imagination-Augmented Agents for Deep Reinforcement Learning + <ul> + <li> + <p>The paper presents I2A (Imagination-Augmented Agent) that combines the model-based and model-free approaches, leading to data efficiency and robustness even with imperfect models.</p> + </li> + <li> + <p>The I2A agent uses the predictions from a learned environment model as additional context in deep policy networks. This leads to improved data efficiency and robustness to imperfect models.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1707.06203">Link to the paper</a></p> + </li> + <li> + <p>The I2A agent has two main modules - the Imagination module and the Policy module.</p> + </li> + <li> + <p><strong>Imagination Module</strong></p> + + <ul> + <li><strong>Environment Model</strong> + <ul> + <li>This is a recurrent model, trained in an unsupervised manner using the agent’s trajectories. It can be used to predict the future state given the current state and action.</li> + <li>The environment model can be rolled out multiple times to obtain a simulated trajectory or an “imagined” trajectory.</li> + <li>During each rollout, the actions are chosen using a rollout policy π<sub>r</sub>.</li> + </ul> + </li> + <li><strong>Rollout Encoder</strong> + <ul> + <li>A rollout encoder <em>E</em> (LSTM) is used to process the entire imagined rollout.</li> + </ul> + </li> + <li>The imagination module is used to generate <em>n</em> trajectories. Each trajectory is a sequence of outputs of the environment model.</li> + <li>The encodings of these <em>n</em> trajectories are concatenated into a single “imagination” vector.</li> + <li>The training data for the environment model is generated from trajectories of a partially trained model-free agent.</li> + <li>Pretraining the environment model (instead of training it jointly with the policy) leads to faster runtime.</li> + </ul> + </li> + <li> + <p><strong>Policy Module</strong></p> + + <ul> + <li>This module uses the outputs of both the model-based path and the model-free path as its input. 
It generates the policy vector and the value function.</li> + </ul> + </li> + <li><strong>Rollout Strategy</strong> + <ul> + <li>One rollout is performed for each possible action in the environment, ie the first action in the i<sup>th</sup> rollout is the i<sup>th</sup> action in the action set.</li> + <li>Subsequent actions are generated using a shared rollout policy π<sub>r</sub></li> + <li>An effective strategy was to create a small model-free network π<sub>r</sub>(o<sub>t</sub>) and then add a KL loss component that encourages π<sub>r</sub>(o<sub>t</sub>) to be similar to the imagination-augmented policy π(o<sub>t</sub>).</li> + </ul> + </li> + <li><strong>Baselines</strong> + <ul> + <li>Model-free agent</li> + <li>Copy-model agent - same as I2A but the environment model is replaced by a “copy” model that just returns the input observations.</li> + </ul> + </li> + <li><strong>Environments</strong> + <ul> + <li>Sokoban + <ul> + <li>The task is to push a number of boxes onto given target locations.</li> + <li>I2A outperforms the baselines and gains in performance as the number of unrolling steps increases (though at a diminishing rate).</li> + <li>In case of poor environment models, the agent seems to be able to ignore the later part of the rollout when the error starts to accumulate.</li> + <li>A Monte Carlo search algorithm (without an explicit rollout encoder) performed poorly compared to the model using the rollout encoder.</li> + <li>Predicting the reward along with the value function and action seems to speed up training.</li> + <li>If a near-perfect model is available, the I2A agent’s performance can be improved by performing Monte Carlo search with the trained I2A agent as the rollout policy. The agent plays entire episodes in simulation and tries to find a successful action sequence within 10 retries.</li> + </ul> + </li> + <li><strong>MiniPacman</strong> + <ul> + <li>The I2A agent is evaluated to see if a single model can be used to solve multiple tasks.</li> + <li>A new environment is designed to define multiple tasks in an environment with shared state transitions.</li> + <li>Each task is specified by a 5-dimensional reward vector that associates a reward with moving, eating food, eating a pill, eating a ghost and being eaten by a ghost.</li> + <li>A single environment model is trained to predict both observations (frames) and events (eg “eating a pill”). This way, the environment model is shared across all tasks.</li> + <li>Baseline agents and I2As are trained on each task separately. 
The I2A architecture outperforms the standard agent in all tasks and the copy-model baseline in all but one task.</li> + <li>The improvement in performance is higher for tasks where rewards are sparse and where the anticipation of ghost dynamics is especially important, indicating that the I2A agent can use the environment model to explore the environment more effectively.</li> + </ul> + </li> + </ul> + </li> +</ul> + + + + + Kronecker Recurrent Units + + 2018-07-19T00:00:00-04:00 + /site/2018/07/19/Kronecker Recurrent Units + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Recurrent Neural Networks have two key issues:</p> + + <ul> + <li> + <p><strong>Over-parameterization</strong>, which increases the time for training and inference.</p> + </li> + <li> + <p><strong>Ill-conditioned</strong> recurrent weight matrix, which makes training difficult due to vanishing or exploding gradients.</p> + </li> + </ul> + </li> + <li> + <p>The paper presents a flexible RNN model called KRU (Kronecker Recurrent Units) which overcomes the above problems by using a Kronecker factored recurrent matrix and soft unitary constraints on the factors.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1705.10142">Link to the paper</a></p> + </li> +</ul> + +<h2 id="related-work">Related Work</h2> + +<h3 id="existing-solutions-for-overparameterization">Existing solutions for overparameterization</h3> + +<ul> + <li> + <p>Low-rank decomposition.</p> + </li> + <li> + <p>Training a neural network on the soft targets predicted by a big pre-trained network.</p> + </li> + <li> + <p>Low-bit precision training.</p> + </li> + <li> + <p>Hashing.</p> + </li> +</ul> + +<h3 id="existing-solutions-for-vanishing-and-exploding-gradients">Existing solutions for vanishing and exploding gradients</h3> + +<ul> + <li> + <p>Gating mechanisms like in LSTMs.</p> + </li> + <li> + <p>Gradient Clipping.</p> + </li> + <li> + <p>Orthogonal Weight Initialization.</p> + </li> + <li> + <p>Parameterizing the recurrent weight matrix.</p> + </li> +</ul> + +<h2 id="kru">KRU</h2> + +<ul> + <li> + <p>Uses a Kronecker factored recurrent matrix, which enables controlling the number of parameters and the number of factor matrices.</p> + </li> + <li> + <p>Vanishing and exploding gradients are taken care of by using a soft unitary constraint.</p> + </li> + <li> + <p>Why not use a strict unitary constraint:</p> + + <ul> + <li> + <p>It restricts the search space and makes the learning process unstable.</p> + </li> + <li> + <p>It makes forgetting (irrelevant) information difficult.</p> + </li> + <li> + <p>Relaxing the strict constraint has been shown to improve the convergence speed and generalization performance.</p> + </li> + </ul> + </li> + <li> + <p>KRU can be easily plugged into RNNs, LSTMs and other variants.</p> + </li> + <li> + <p>The recurrent matrix <em>W</em> is parameterized as a Kronecker product of <em>F</em> matrices <em>W<sub>0</sub>, …, W<sub>F-1</sub></em>, where each <em>W<sub>f</sub></em> is a complex matrix of shape <em>P<sub>f</sub> x Q<sub>f</sub></em>, and the product of all <em>P<sub>f</sub></em> and the product of all <em>Q<sub>f</sub></em> are both equal to <em>N</em> (see the sketch at the end of this summary).</p> + </li> + <li> + <p>Why is <em>W</em> a complex matrix?</p> + + <ul> + <li> + <p>In the real space, the set of all unitary matrices have determinant 1 or -1.</p> + + <ul> + <li> + <p>Given that the determinant is a continuous function, the unitary set in the real space is disconnected.</p> + </li> + <li> + <p>The unitary set in the complex space is connected as its 
determinants are points on the unit circle.</p> + </li> + </ul> + </li> + </ul> + </li> +</ul> + +<h3 id="soft-unitary-constraint">Soft Unitary Constraint</h3> + +<ul> + <li> + <p>A soft unitary constraint is introduced in the form of a regularization term norm(W<sub>f</sub><sup>H</sup>W<sub>f</sub> - I)<sup>2</sup> (per Kronecker factored recurrent matrix).</p> + </li> + <li> + <p>If each of the Kronecker factors is unitary, the resulting matrix <em>W</em> would also be unitary.</p> + </li> + <li> + <p>It is computationally inefficient to apply this constraint over the recurrent matrix <em>W</em> itself, as the complexity of the regularizer is given as <em>O(N<sup>3</sup>)</em>.</p> + </li> + <li>Use of the Kronecker factorisation makes it computationally feasible to use this regulariser.</li> +</ul> + +<h2 id="experiment">Experiment</h2> + +<ul> + <li> + <p>The Kronecker recurrent model is compared against the existing recurrent models for multiple tasks, including copy memory, adding memory, pixel-by-pixel MNIST, char-level language models, polyphonic music modelling, and framewise phoneme classification.</p> + </li> + <li> + <p>For most of the tasks, the KRU model produces results comparable to the best performing models despite using fewer parameters.</p> + </li> + <li> + <p>Using soft unitary constraints in KRU provides a principled alternative to gradient clipping (a common heuristic to avoid exploding gradients).</p> + </li> + <li> + <p>Further, recent theoretical results suggest that gradient descent converges to a global optimizer of linear recurrent networks even if the learning problem is non-convex, provided that the spectral norm of the recurrent matrix is bounded by 1.</p> + </li> + <li> + <p>The key takeaway from the paper is that the state should be high dimensional so that a high capacity network can be used for encoding and decoding the input and output, while the recurrent dynamics should be implemented via a low capacity model.</p> + </li> +</ul>
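+
+<p>To make the Kronecker parameterization concrete, a small NumPy sketch (real-valued factors for simplicity; the paper uses complex factors):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
+
+# W = W_0 (x) W_1 (x) ... (x) W_{F-1}. With F factors of shape 2x2,
+# an N x N recurrent matrix (N = 2^F) uses 4F parameters instead of N^2.
+rng = np.random.default_rng(0)
+factors = [rng.normal(size=(2, 2)) for _ in range(4)]  # F = 4, so N = 16
+W = factors[0]
+for W_f in factors[1:]:
+    W = np.kron(W, W_f)
+print(W.shape)  # (16, 16), built from only 16 parameters
+</code></pre></div></div>
+ + + + + Learning Independent Causal Mechanisms + + 2018-07-11T00:00:00-04:00 + /site/2018/07/11/Learning Independent Causal Mechanisms + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a very interesting approach for learning independent (inverse) data transformations from a set of transformed data points in an unsupervised manner.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1712.00961">Link to the paper</a></p> + </li> +</ul> + +<h2 id="formulation">Formulation</h2> + +<ul> + <li> + <p>We start with a given data distribution <em>P</em> (say the MNIST dataset) where each x ∈ R<sup>d</sup>.</p> + </li> + <li> + <p>Consider N transformations M<sub>1</sub>, …, M<sub>N</sub> (functions that map input x to transformed input x’). Note that N need not be known beforehand.</p> + </li> + <li> + <p>These transformations can be thought of as independent (from other transformations) causal mechanisms.</p> + </li> + <li> + <p>Applying these transformations would give N new distributions Q<sub>1</sub>, …, Q<sub>N</sub>.</p> + </li> + <li> + <p>These individual distributions are combined to form a single transformed distribution Q which contains the union of samples from the individual distributions.</p> + </li> + <li> + <p>At training time, two datasets are created. One dataset corresponds to untransformed objects (sampled from <em>P</em>), referred to as <em>D<sub>P</sub></em>. 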
The other dataset corresponds to samples from the transformed distribution <em>Q</em> and is referred to as <em>D<sub>Q</sub></em>.</p> + </li> + <li> + <p>Note that all the samples in <em>D<sub>P</sub></em> and <em>D<sub>Q</sub></em> are sampled independently and no supervising information is needed.</p> + </li> + <li> + <p>A series of N’ parametric models, called experts, are initialized and trained to learn the different mechanisms.</p> + </li> + <li> + <p>For simplicity, assume that N = N’. If N &gt; N’, some experts would learn more than one transformation or certain transformations would not be learnt. If N &lt; N’, some experts would not learn anything or some experts would learn the same distribution. All of these cases can be diagnosed and corrected by changing the number of experts.</p> + </li> + <li> + <p>The experts are trained with the goal of maximizing an objective function <em>c</em>: R<sup>d</sup> to R. <em>c</em> takes high values on the support of <em>P</em> and low values outside.</p> + </li> + <li> + <p>During training, an example x<sub>Q</sub> (from D<sub>Q</sub>) is fed to all the experts at the same time. Each expert produces a value <em>c<sub>j</sub> = c(E<sub>j</sub>(x<sub>Q</sub>))</em></p> + </li> + <li> + <p>The winning expert is the one whose output is the max among all the outputs. Its parameters are updated to maximise its output while the other experts are not updated (see the sketch after this section).</p> + </li> + <li> + <p>This forces the best performing model to become even better and hence specialize.</p> + </li> + <li> + <p>The objective <em>c</em> comes from adversarial training, where a discriminator network discriminates between the untransformed input and the output of the experts.</p> + </li> + <li> + <p>Each expert can be thought of as a GAN that conditions on the input x<sub>Q</sub> (and not on a noise vector). The output of the different experts is fed to the discriminator, which provides both a selection mechanism and the gradients for training the experts.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Experiments are performed on the MNIST dataset using transformations like translation along 4 directions and along 4 diagonals, contrast shift and inversion.</p> + </li> + <li> + <p>The discriminator is further trained against the outputs of all the losing experts, thereby further strengthening the winning expert.</p> + </li> +</ul> + +<h3 id="approximate-identity-initialization">Approximate Identity Initialization</h3> + +<ul> + <li> + <p>The experts are initialized randomly and then pretrained to approximate the identity function by training with identical input-output pairs.</p> + </li> + <li> + <p>This ensures that the experts start from a similar level.</p> + </li> + <li> + <p>In practice, it seems necessary for the success of the proposed approach.</p> + </li> +</ul>
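+
+<p>A minimal sketch of the winner-take-all update described above (an illustration; the discriminator's own training step and gradient bookkeeping are omitted):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+
+def expert_step(experts, expert_opts, discriminator, x_q):
+    # Score every expert's output with the discriminator c (no gradients needed).
+    with torch.no_grad():
+        scores = torch.stack([discriminator(E(x_q)).mean() for E in experts])
+    j = int(scores.argmax())  # winning expert
+    loss = -discriminator(experts[j](x_q)).mean()  # winner maximizes c(E_j(x_q))
+    expert_opts[j].zero_grad()
+    loss.backward()  # discriminator gradients should be cleared in its own update
+    expert_opts[j].step()  # only the winner is updated
+    return j
+</code></pre></div></div>
+
+<h3 id="observations">Observations</h3> + +<ul> + <li> + <p>During the initial phase, there is heavy competition between the experts, and eventually different winners emerge for different transformations.</p> + </li> + <li>The approximate quality of the reconstructed output was also evaluated using a downstream task. 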
+ <ul> + <li>3 types of inputs were created: + <ul> + <li>Untransformed images</li> + <li>Transformed images</li> + <li>Transformed images after being processed by the experts.</li> + </ul> + </li> + <li>These inputs are fed to a pretrained MNIST classifier.</li> + <li>The classifier performs poorly on the transformed images while the performance for images processed by the experts quickly catches up with the performance on untransformed images.</li> + </ul> + </li> + <li>The experts E<sub>i</sub> generalize to data points from a different dataset as well. + <ul> + <li>To test the generalisation capabilities of the experts, a sample of data from the Omniglot dataset is transformed and fed to the experts (which are trained only on MNIST).</li> + <li>Each expert consistently applies the same transformation even though the inputs are outside the training domain.</li> + <li>This suggests that the experts have generalized to the different transformations irrespective of the underlying dataset.</li> + </ul> + </li> +</ul> + +<h2 id="comments">Comments</h2> + +<ul> + <li> + <p>The experiments are quite limited in terms of the complexity of the dataset and the complexity of the transformations, but they provide evidence for a promising connection between deep learning and causality.</p> + </li> + <li> + <p>The appendix mentions that in case there are too many experts, for most of the tasks only one model specialises and the extra experts do not specialize at all. This is interesting, as there is no explicit regularisation penalty which prevents the emergence of multiple experts per task.</p> + </li> +</ul> + + + + + Memory-based Parameter Adaptation + + 2018-07-04T00:00:00-04:00 + /site/2018/07/04/Memory-Based Parameter Adaption + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Standard deep learning networks are not suitable for the continual learning setting, as the change in the data distribution leads to catastrophic forgetting.</p> + </li> + <li> + <p>The paper proposes Memory-based Parameter Adaptation (MbPA), a technique that augments a standard neural network with an episodic memory (containing examples from the previous tasks).</p> + </li> + <li> + <p>This episodic memory allows for rapid acquisition of new knowledge (corresponding to the current task) while preserving performance on the previous tasks.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1802.10542">Link to the paper</a></p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>MbPA consists of 3 components:</p> + + <ul> + <li>Embedding Network <em>f</em></li> + <li>Memory <em>M</em></li> + <li>Output network <em>g</em></li> + </ul> + </li> + <li> + <p><em>f</em> and <em>g</em> are parametric components while <em>M</em> is a non-parametric component.</p> + </li> + <li> + <p><em>M</em> is a dynamically sized dictionary where the key represents the output of the embedding network and the value represents the desired output for a given input (input to the model).</p> + </li> + <li> + <p>When a new training tuple (x<sub>j</sub>, y<sub>j</sub>) is fed as input to the model, a key-value pair (h<sub>j</sub>, v<sub>j</sub>) is added to the memory, where h<sub>j</sub> = f(x<sub>j</sub>) and v<sub>j</sub> = y<sub>j</sub>.</p> + </li> + <li> + <p>The memory has a fixed size and acts as a circular buffer. 
When it gets filled up, earlier examples are dropped.</p> + </li> + <li> + <p>When accessing the memory using a key <em>h<sub>key</sub></em>, the k-nearest neighbours (in terms of distance from the given key) are retrieved.</p> + </li> +</ul> + +<h2 id="training-phase">Training Phase</h2> + +<ul> + <li>During the training phase, the memory is only used to store the input examples and does not interfere with the training procedure.</li> +</ul> + +<h2 id="testing-phase">Testing Phase</h2> + +<ul> + <li> + <p>During testing, the memory is used to adapt the parameters of the output network <em>g</em> while the embedding network <em>f</em> remains unchanged.</p> + </li> + <li> + <p>Given the input x, obtain the embedding corresponding to x and, using that as the key, retrieve the k-nearest neighbours from the memory.</p> + </li> + <li> + <p>Each retrieved neighbour is a tuple of the form (h<sub>k</sub>, v<sub>k</sub>, w<sub>k</sub>), where w<sub>k</sub> is proportional to the closeness between the input query and the key corresponding to the retrieved example.</p> + </li> + <li> + <p>The collection of all the retrieved examples is referred to as the context <em>C</em>.</p> + </li> + <li> + <p>The parameters of the output network <em>g</em> are adapted from θ to θ<sub>x</sub> where θ<sub>x</sub> = θ + δ<sub>M</sub>(x, θ)</p> + </li> + <li> + <p>δ<sub>M</sub>(x, θ) is referred to as the contextual update of the parameters of the output network (see the sketch after the interpretation below).</p> + </li> +</ul> + +<h2 id="interpretation-of-mbpa">Interpretation of MbPA</h2> + +<ul> + <li> + <p>MbPA can be interpreted as decreasing the weighted average of the negative log likelihood over the retrieved neighbours in the context C.</p> + </li> + <li> + <p>The expression corresponding to δ<sub>M</sub>(x, θ) can be obtained by performing gradient descent on the maximum a posteriori objective over the context C.</p> + </li> + <li> + <p>The posterior expression can be written as a sum of two terms - one corresponding to a weighted likelihood of the data in the context C, and the other corresponding to a regularisation term that prevents overfitting the data.</p> + </li> + <li> + <p>This idea can be thought of as a generalisation of attention. Attention can be viewed as fitting a constant function over the neighbourhood of memories while MbPA fits a more general function which is parameterised by the output network of the given model. Refer to appendix E in the paper for further details.</p> + </li> +</ul>
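+
+<p>A minimal sketch of the contextual update (an illustration; <em>g(inputs, theta)</em> is an assumed functional form of the output network, and the kernel weighting is simplified):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def mbpa_adapt(g, theta, mem_keys, mem_vals, h_x, k=32, steps=5, lr=0.1, beta=1e-3):
+    """Adapt the output-network parameters theta to theta_x at test time,
+    using the k nearest memories to the query embedding h_x."""
+    dist = ((mem_keys - h_x) ** 2).sum(dim=-1)
+    near = dist.topk(k, largest=False).indices
+    w = 1.0 / (1e-8 + dist[near])  # closer neighbours get larger weights
+    w = w / w.sum()
+    theta_x = theta.clone().requires_grad_(True)
+    for _ in range(steps):
+        # weighted likelihood over the context, regularized toward theta
+        nll = F.cross_entropy(g(mem_keys[near], theta_x), mem_vals[near], reduction="none")
+        loss = (w * nll).sum() + beta * ((theta_x - theta) ** 2).sum()
+        grad, = torch.autograd.grad(loss, theta_x)
+        theta_x = (theta_x - lr * grad).detach().requires_grad_(True)
+    return theta_x
+</code></pre></div></div>
+
+<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>MbPA aims to solve the fundamental problem of enabling the model to deal with changes in the data distribution.</p> + </li> + <li> + <p>In that sense, it is evaluated on a wide range of settings: continual learning, incremental learning, unbalanced datasets and change in data distribution at test time.</p> + </li> + <li> + <p>Continual Learning:</p> + + <ul> + <li> + <p>In this setting, the model encounters a sequence of tasks and cannot revisit a previous task.</p> + </li> + <li> + <p>The permuted MNIST dataset was used.</p> + </li> + <li> + <p>The key takeaway is that once a task is catastrophically forgotten, only a few gradient updates on carefully selected data are sufficient to recover the performance.</p> + </li> + </ul> + </li> + <li> + <p>Incremental Learning:</p> + + <ul> + <li> + <p>In this setting, the model is trained on a subset of classes and then introduced to novel, unseen classes. 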
The model is tested to see if it can incorporate the new knowledge while retaining the knowledge about the previous classes.</p> + </li> + <li> + <p>The ImageNet dataset with a ResNet V1 model is used. It is first pretrained on 500 classes and then fine-tuned to see how quickly it could adapt to the new classes.</p> + </li> + </ul> + </li> + <li> + <p>Unbalanced Dataset:</p> + + <ul> + <li>This setting is similar to the incremental learning setting, with the key difference that once the model has been trained on a part of the dataset and is to be finetuned to acquire new knowledge, the dataset used for finetuning is much smaller than the initial dataset, thus creating the effect of unbalanced datasets.</li> + </ul> + </li> + <li> + <p>Language Modelling:</p> + + <ul> + <li>MbPA is used to adapt to the shift in the word distribution that is common in language modelling tasks. The PTB and WikiText datasets were used.</li> + </ul> + </li> + <li> + <p>MbPA exhibits strong performance on all these tasks, showing that the memory-based parameter adaptation technique is effective across a range of tasks in supervised learning.</p> + </li> +</ul> + + + + + Born Again Neural Networks + + 2018-06-09T00:00:00-04:00 + /site/2018/06/09/Born Again Neural Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper explores knowledge distillation (KD) from the perspective of transferring knowledge between 2 networks of identical capacity.</p> + </li> + <li> + <p>This is in contrast to much of the previous work in KD, which has focused on transferring knowledge from a larger network to a smaller network.</p> + </li> + <li> + <p>The paper reports that these Born Again Networks (BANs) outperform their teachers by significant margins in many cases.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1805.04770">Link to the paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li>The standard KD setting is as follows: + <ul> + <li>Start with an untrained network (or an ensemble of networks) and train it for the given task. This network is referred to as the teacher network.</li> + <li>Now start with another untrained network (generally of smaller size than the teacher network) and train it using the output of the teacher network. This network is referred to as the student network.</li> + </ul> + </li> + <li> + <p>The paper augments this setting with an extra cross-entropy loss between the output of the teacher and the student networks. The student tries to predict the correct answer while matching the output distribution of the teacher (see the sketch below).</p> + </li> + <li> + <p>The resulting student network is referred to as BAN - Born Again Network.</p> + </li> + <li> + <p>The same approach can be used multiple times (with diminishing returns), where the kth generation student is initialized by knowledge transfer from the (k-1)th generation student.</p> + </li> + <li>The outputs of multiple generations of BANs are combined via averaging to produce BANE (Born Again Network Ensemble).</li> +</ul>
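+
+<p>A plausible instantiation of the augmented objective (a sketch; the paper describes an extra cross-entropy term between teacher and student outputs, written here with soft targets):</p>
+
+<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
+import torch.nn.functional as F
+
+def ban_loss(student_logits, teacher_logits, labels):
+    """Student fits the ground-truth labels while matching the
+    teacher's output distribution (soft targets)."""
+    hard = F.cross_entropy(student_logits, labels)
+    soft_targets = F.softmax(teacher_logits, dim=-1)
+    soft = -(soft_targets * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
+    return hard + soft
+</code></pre></div></div>
+
+<h2 id="dark-knowledge">Dark Knowledge</h2> + +<ul> + <li> + <p><a href="https://shagunsodhani.in/papers-I-read/Distilling-the-Knowledge-in-a-Neural-Network">Hinton et al</a> suggested that even when the output of the teacher network is incorrect, it contains useful information about the similarity between the output classes. 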
This information is referred to as the “dark knowledge”.</p> + </li> + <li> + <p>The current paper observes that, during distillation, the gradient on the correct output dimension resembles the gradient from normal supervised training up to a weight factor. This sample-specific weight is defined by the value of the teacher’s max output.</p> + </li> + <li> + <p>This suggests that distillation may be performing some kind of importance weighting. To explore this further, the paper considers 2 cases:</p> + + <ul> + <li> + <p>Confidence Weighted By Teacher Max (CWTM) - where each example in the student’s loss function is weighted by the confidence that the teacher has in the prediction for that sample. The student incurs a higher loss if the teacher was more confident about the example.</p> + </li> + <li> + <p>Dark Knowledge with Permuted Predictions (DKPP) - The non-argmax outputs of the teacher’s predictive distribution are permuted, thus destroying the information about which output classes are related.</p> + </li> + </ul> + </li> + <li> + <p>The key effect of these variations is that the covariance between the output classes is lost, and classical knowledge distillation would not be sufficient to explain improvements (if any).</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<h3 id="image-data">Image Data</h3> + +<ul> + <li>Datasets + <ul> + <li>CIFAR10</li> + <li>CIFAR100</li> + </ul> + </li> + <li>Baselines + <ul> + <li>ResNets</li> + <li>DenseNets</li> + </ul> + </li> + <li>BAN Variants + <ul> + <li>BAN-DenseNet and BAN-ResNet - Train a sequence of 2 or 3 BANs using DenseNets and ResNets. Different variants constrain BANs to be similar to their teacher or penalize the l2-distance between student and teacher activations etc.</li> + <li>Two settings with CWTM and DKPP as explained earlier.</li> + <li>BAN-ResNet with a DenseNet teacher and BAN-DenseNet with a ResNet teacher</li> + </ul> + </li> +</ul> + +<h3 id="text-data">Text Data</h3> + +<ul> + <li>Datasets: + <ul> + <li>PTB Dataset</li> + </ul> + </li> + <li>Baselines + <ul> + <li>CNN-LSTM model</li> + </ul> + </li> + <li>BAN Variant + <ul> + <li>LSTM</li> + </ul> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li>BAN student models improved over their teachers in most of the configurations.</li> + <li>Training BANs across multiple generations leads to saturating improvements.</li> + <li>The student models exhibit improvements even in the control settings (CWTM and DKPP). 
+ <ul> + <li>One reason could be that the permutation procedure did not remove the higher order moments of the output distribution.</li> + <li>Improvements in the CWTM model suggest that the pre-trained models can be used to rebalance the training set by giving less weight to samples where the teacher’s output distribution is more spread out.</li> + </ul> + </li> +</ul> + + + + + Net2Net-Accelerating Learning via Knowledge Transfer + + 2018-05-21T00:00:00-04:00 + /site/2018/05/21/Net2Net - Accelerating Learning via Knowledge Transfer + <h2 id="notes">Notes</h2> + +<ul> + <li> + <p>The paper presents a simple yet effective approach for transferring knowledge from a trained neural network (referred to as the teacher network) to a large, untrained neural network (referred to as the student network).</p> + </li> + <li> + <p>The key idea is to use a function-preserving transformation that guarantees that for any given input, the output from the teacher network and the newly created student network would be the same.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1511.05641">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/paengs/Net2Net">Link to an implementation</a></p> + </li> + <li> + <p>The approach works as follows - Let us say that the teacher network is represented by the transformation <em>y = f(x, θ)</em> where <em>θ</em> refers to the parameters of the network. The task is to choose a new set of parameters <em>θ’</em> for the student network <em>g(x, θ’)</em> such that for all <em>x, f(x, θ) = g(x, θ’)</em></p> + </li> + <li> + <p>To start, we can assume that <em>f</em> and <em>g</em> are composed of standard linear layers. Layers <em>i</em> and <em>i+1</em> are represented by weights <em>W<sub>mxn</sub><sup>i</sup></em> and <em>W<sub>nxp</sub><sup>i+1</sup></em></p> + </li> + <li> + <p>We want to grow layer <em>i</em> to have <em>q</em> output units (where <em>q</em> &gt; <em>n</em>) and layer <em>i+1</em> to have <em>q</em> input units. The new weight matrices would be <em>U<sub>mxq</sub><sup>i</sup></em> and <em>U<sub>qxp</sub><sup>i+1</sup></em></p> + </li> + <li> + <p>The first <em>n</em> columns (rows) of <em>W<sup>i</sup></em> (<em>W<sup>i+1</sup></em>) are copied as-is into <em>U<sup>i</sup></em> (<em>U<sup>i+1</sup></em>).</p> + </li> + <li> + <p>For filling the remaining <em>q-n</em> slots, columns (rows) are sampled randomly from <em>W<sup>i</sup></em> (<em>W<sup>i+1</sup></em>).</p> + </li> + <li> + <p>Finally, each row in <em>U<sup>i+1</sup></em> is scaled by dividing by the replication factor of the corresponding unit, to ensure that the output of the function remains unchanged by the operation.</p> + </li> + <li> + <p>Since convolutions can be seen as multiplication by a double block circulant matrix, the approach can be readily extended to convolutional networks.</p> + </li> + <li> + <p>The benefits of using this approach are the following:</p> + + <ul> + <li>The newly created student network performs at least as well as the teacher network.</li> + <li>Any changes to the network are guaranteed to be an improvement.</li> + <li>It is safe to optimize all the parameters in the network.</li> + </ul> + </li> + <li> + <p>The variant discussed above is called the <strong>Net2WiderNet</strong> variant. There is another variant called <strong>Net2DeeperNet</strong> that enables the network to grow in depth.</p> + </li> + <li> + <p>In that case, a new matrix, <em>U</em>, initialized as the identity matrix, is added to the network. 
+ <li>
+ <p>The benefits of using this approach are the following:</p>
+ <ul>
+ <li>The newly created student network performs at least as well as the teacher network.</li>
+ <li>Any changes to the network are guaranteed to be an improvement.</li>
+ <li>It is safe to optimize all the parameters in the network.</li>
+ </ul>
+ </li>
+ <li>
+ <p>The variant discussed above is called the <strong>Net2WiderNet</strong> variant. There is another variant called <strong>Net2DeeperNet</strong> that enables the network to grow in depth.</p>
+ </li>
+ <li>
+ <p>In that case, a new matrix, <em>U</em>, initialized as the identity matrix, is added to the network. Note that unlike <strong>Net2WiderNet</strong>, this approach would not work with an arbitrary activation function between the layers.</p>
+ </li>
+ </ul>
+
+ <h2 id="strengths">Strengths</h2>
+
+ <ul>
+ <li>
+ <p>The model can accelerate the training of neural networks, especially during the development cycle when designers are trying out different models.</p>
+ </li>
+ <li>
+ <p>The approach could potentially be used in life-long learning systems where the model is trained over a stream of data and needs to grow over time.</p>
+ </li>
+ </ul>
+
+ <h2 id="limitations">Limitations</h2>
+
+ <ul>
+ <li>The function-preserving transformations need to be worked out manually. Extra care needs to be taken when operations like concatenation or batch norm are present.</li>
+ </ul>
+
+
+
+
+ Learning to Count Objects in Natural Images for Visual Question Answering
+
+ 2018-05-06T00:00:00-04:00
+ /site/2018/05/06/Learning to Count Objects in Natural Images for Visual Question Answering
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>Most visual question-answering (VQA) models perform poorly on the task of counting objects in an image. The main reasons are:
+ <ul>
+ <li>Most VQA models use a soft attention mechanism to perform a weighted sum over the spatial features to obtain a single feature vector. These aggregated features help in most categories of questions but seem to hurt counting-based questions.</li>
+ <li>For the counting questions, we do not have a ground truth segmentation of where the objects to be counted are present in the image. This limits the scope of supervision.</li>
+ </ul>
+ </li>
+ <li>
+ <p>Additionally, we need to ensure that any modification to the architecture, made to enhance performance on counting questions, does not degrade performance on other classes of questions.</p>
+ </li>
+ <li>
+ <p>The paper proposes to overcome these challenges by using the attention maps (and not the aggregated feature vectors) as input to a separate <strong>count</strong> module.</p>
+ </li>
+ <li><a href="https://arxiv.org/abs/1802.05766">Link to the paper</a></li>
+ </ul>
+
+ <h2 id="notes">Notes</h2>
+
+ <p>The basic idea is quite intuitive: when we perform weighted averaging based on different attention maps, we end up averaging the features corresponding to the different instances of an object. This makes the feature vectors indistinguishable from the scenario where we had just one instance of the object in the image.</p>
+
+ <p>Even multiple glimpses (multiple attention steps) cannot resolve this problem, as the weights given to one feature vector do not depend on the other feature vectors (that are attended to). Hard attention could be more useful than soft attention, but there is not much empirical evidence in support of this hypothesis.</p>
+
+ <p>The proposed <strong>count</strong> module is a separate pipeline that can be integrated with most of the existing attention-based VQA models without affecting the performance on non-count questions.</p>
+
+ <p>The inputs to the <strong>count</strong> module are the attention maps and the object proposals (coming from some pre-trained model like the RCNN model) and the output is a count feature vector which is used to answer the count-based question.</p>
+
+ <p>The top-level idea is the following - given the object proposals and the attention maps, create a graph where the nodes are objects (object proposals) and the edges capture how similar two object proposals are (how much they overlap).
The graph is transformed (by removing and scaling edges) so that the object count can be obtained easily.</p>
+
+ <p>To explain their methodology, the paper simplifies the setting by making two assumptions:</p>
+ <ul>
+ <li>The first assumption is that the attention weights are either 1 (when the object is present in the proposal) or 0 (when the object is absent from the proposal).</li>
+ <li>The second assumption is that any two object proposals either overlap completely (in which case they correspond to the exact same object and hence receive the exact same weights) or have zero overlap (in which case they must correspond to completely different objects).</li>
+ </ul>
+
+ <p>These simplifying assumptions are made only for the sake of exposition and do not limit the capabilities of the <strong>count</strong> module.</p>
+
+ <p>Given the assumptions, the task of the count module is to handle the exact duplicates to prevent double-counting of objects.</p>
+
+ <p>As the first step, the attention weights (<strong>a</strong>) are used to generate an attention matrix (<strong>A</strong>) by performing an outer product between <strong>a</strong> and <strong>a<sup>T</sup></strong>. This corresponds to the step of creating a graph from the input.</p>
+
+ <p><strong>A</strong> corresponds to the adjacency matrix of that graph. The attention weight for the <em>i<sup>th</sup></em> proposal corresponds to the <em>i<sup>th</sup></em> node in the graph and the edge between the nodes <em>i</em> and <em>j</em> has the weight <strong>a<sub>i</sub>*a<sub>j</sub></strong>.</p>
+
+ <p>Also note that the graph is a weighted directed graph and the subgraph of vertices satisfying the condition <strong>a<sub>i</sub></strong> = 1 is a complete directed graph with self-loops. Given such a graph, the number of vertices is <em>V = sqrt(E)</em>, where <em>E</em> can be computed by summing over the adjacency matrix. This implies that if the proposals are distinct, then the count can be obtained trivially by performing a sum over the adjacency matrix.</p>
+
+ <p>The objective is now to eliminate the edges such that the underlying objects are the vertices of a complete subgraph. This requires removing two types of duplicate edges - intra-object edges and inter-object edges.</p>
+
+ <p>Intra-object edges can be removed by computing a distance matrix, <strong>D</strong>, defined as 1 - IoU, where the IoU matrix corresponds to the Intersection-over-Union matrix.</p>
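+ <p>To make the graph construction and the <em>V = sqrt(E)</em> identity concrete, here is a toy numpy sketch (hypothetical weights, under the 0/1 assumptions above):</p>
+
+ <pre><code>import numpy as np
+
+ # Toy example: 4 proposals with 0/1 attention weights; the first 3
+ # proposals contain the queried object, the last one does not.
+ a = np.array([1.0, 1.0, 1.0, 0.0])
+ A = np.outer(a, a)    # adjacency matrix; A[i, j] = a_i * a_j
+ E = A.sum()           # number of edges, self-loops included
+ count = np.sqrt(E)    # 3.0 - correct only if the proposals are distinct
+ </code></pre>
+
+ <p>If the three active proposals actually cover the same underlying object, this naive count of 3 is wrong; removing the duplicate edges, as described next, fixes this.</p>
+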
+ <p>A modified adjacency matrix <strong>A’</strong> is obtained by performing the element-wise product between f<sub>1</sub>(<strong>A</strong>) and f<sub>2</sub>(<strong>D</strong>), where f<sub>1</sub> and f<sub>2</sub> are piecewise linear functions that are learnt via backpropagation.</p>
+
+ <p>The inter-object edges are removed in the following manner:</p>
+
+ <ul>
+ <li>Count the number of proposals that correspond to each instance of an object and then scale down the edges corresponding to the different instances by that number.</li>
+ <li>This creates the effect of reducing the weights of multiple proposals to that of a single proposal.</li>
+ <li>The number of proposals corresponding to an object is not available as an annotation in the training pipeline and is estimated based on the similarity between the different proposals (measured via the attention weights <strong>a</strong>, adjacency matrix <strong>A</strong> and distance matrix <strong>D</strong>).</li>
+ <li>The matrix corresponding to the similarity between proposals (<strong>sim<sub>i, j</sub></strong>) is transformed into a vector corresponding to the scaling factor of each node (<strong>s<sub>i</sub></strong>).</li>
+ </ul>
+
+ <p><strong>s</strong> can be converted into a matrix (by taking the outer product with itself) so as to scale both the incoming and the outgoing edges. The self-edges (which were removed while computing <strong>A’</strong>) are added back (after scaling with <strong>s</strong>) to obtain a new transformed matrix <strong>C</strong>.</p>
+
+ <p>The transformed matrix <strong>C</strong> is a complete graph with self-loops where the nodes correspond to all the relevant object instances and not to object proposals. The actual count can be obtained from <strong>C</strong> by performing a sum over all its values, as described earlier. The original count problem was a regression problem, but it is transformed into a classification problem to avoid scale issues. The network produces a <strong>k</strong>-hot <strong>n</strong>-dimensional vector called <strong>o</strong>, where <strong>n</strong> is the number of object proposals that were fed into the module (and hence the upper limit on how large a number the module can count). In the ideal setting, <strong>k</strong> would be 1, with the network producing an integer value; in practice, the network produces a real number, so <strong>k</strong> can be up to 2. If <strong>c</strong> is an exact integer, the output is a one-hot vector with the value at the index corresponding to <strong>c</strong> set to 1. If <strong>c</strong> is a real number, the output is a linear interpolation between two one-hot vectors (the one-hot vectors correspond to the two integers between which <strong>c</strong> lies).</p>
+
+ <p>The <strong>count</strong> module supports computing the confidence of a prediction by defining two variables p<sub><strong>a</strong></sub> and p<sub><strong>D</strong></sub> which compute the average distance of f<sub>6</sub>(<strong>a</strong>) and f<sub>7</sub>(<strong>D</strong>) from 0.5. The final output <strong>o’</strong> is defined as f<sub>8</sub>(p<sub><strong>a</strong></sub> + p<sub><strong>D</strong></sub>) · <strong>o</strong>.</p>
+
+ <p>All the different f functions are piecewise linear functions and are learnt via backpropagation.</p>
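+ <p>The conversion of the real-valued count <strong>c</strong> into the target vector <strong>o</strong> can be sketched as follows (a minimal illustration, not the paper’s exact code):</p>
+
+ <pre><code>import numpy as np
+
+ def soft_count_target(c, n):
+     """Linear interpolation between the two one-hot vectors around c.
+
+     c: real-valued count read off from C; n: number of object proposals
+     (the largest count the module can represent).
+     """
+     o = np.zeros(n + 1)          # classes 0, 1, ..., n
+     lo = int(np.floor(c))
+     frac = c - lo
+     o[lo] = 1.0 - frac
+     if lo != n:
+         o[lo + 1] = frac         # at most two entries are non-zero
+     return o
+
+ # soft_count_target(2.3, 10) -> [0, 0, 0.7, 0.3, 0, ..., 0]
+ </code></pre>
+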
+ <h2 id="experiments">Experiments</h2>
+
+ <p>The authors created a new category of count-based questions by filtering the number-type questions to remove questions like “What is the time right now”. These questions do have a numerical answer but do not fall under the purview of count-based questions and hence are not targeted by the <strong>count</strong> module.</p>
+
+ <p>The authors augmented a state-of-the-art <a href="https://arxiv.org/abs/1704.03162">VQA model</a> with their <strong>count</strong> module and show substantial gains on the count-type questions of the <a href="https://arxiv.org/abs/1612.00837">VQA-v2 dataset</a>. This augmentation does not drastically impact the performance on non-count questions.</p>
+
+ <p>The overall idea is quite crisp and intuitive, and the paper is easy to follow. It would be even better if there were some more ablation studies. For example, why are the piecewise linear functions assumed to have 16 linear components? Would a smaller or larger number be better?</p>
+
+
+
+
+ Neural Message Passing for Quantum Chemistry
+
+ 2018-04-08T00:00:00-04:00
+ /site/2018/04/08/Neural Message Passing for Quantum Chemistry
+ <h1 id="introduction">Introduction</h1>
+
+ <ul>
+ <li>
+ <p>The paper presents a general message passing architecture called Message Passing Neural Networks (MPNNs) that unifies various existing models for performing supervised learning on molecules.</p>
+ </li>
+ <li>
+ <p>Variants of the MPNN model achieve very good performance on the task of predicting the properties of molecules.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1704.01212">Link to the paper</a></p>
+ </li>
+ </ul>
+
+ <h1 id="mpnn">MPNN</h1>
+
+ <h2 id="setting">Setting</h2>
+
+ <ul>
+ <li>
+ <p>The input to the model is an undirected graph <em>G</em> where node features are represented as <em>x<sub>v</sub></em> (corresponding to node <em>v</em>) and edge features as <em>e<sub>v, w</sub></em> (corresponding to the edge between nodes <em>v, w</em>).</p>
+ </li>
+ <li>
+ <p>The idea is to learn a representation (or feature vector) for all the nodes (and possibly edges) in the graph and use that for the downstream supervised learning task.</p>
+ </li>
+ <li>
+ <p>The model can be easily extended to the setting of directed graphs.</p>
+ </li>
+ <li>
+ <p>The model works in 2 phases:</p>
+ </li>
+ </ul>
+
+ <h2 id="message-passing-phase">Message Passing Phase</h2>
+
+ <ul>
+ <li>
+ <p>All nodes send a <em>message</em> to their neighbouring nodes. The message is a function of the feature vectors corresponding to the sender node (or vertex), the receiver node and the edge connecting the two nodes. The feature vectors are combined to form the message using the <em>message function</em>, which can be implemented as a neural network.</p>
+ </li>
+ <li>
+ <p>Once a node has received messages from all its neighbours, it updates its feature vector by aggregating all the messages. The function used to aggregate and update the feature vector is called the <em>update function</em> and can be implemented as a neural network.</p>
+ </li>
+ <li>
+ <p>After updating the feature vectors, the graph could initiate another round of message passing.</p>
+ </li>
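+ <li>
+ <p>One round of message passing can be sketched as follows (<em>message_fn</em> and <em>update_fn</em> stand in for the learned neural networks; a minimal sketch, not the paper’s implementation):</p>
+ <pre><code>import numpy as np
+
+ def message_passing_step(h, edges, message_fn, update_fn):
+     """One round of message passing over a graph (a minimal sketch).
+
+     h: dict node -> feature vector; edges: dict (v, w) -> edge feature,
+     with both (v, w) and (w, v) present for an undirected graph.
+     """
+     incoming = {v: [] for v in h}
+     for (v, w), e in edges.items():
+         # message from sender v to receiver w, built from both nodes' features
+         incoming[w].append(message_fn(h[v], h[w], e))
+     # every node aggregates its messages and updates its own features
+     # (assumes every node has at least one incoming message)
+     return {v: update_fn(h[v], np.sum(msgs, axis=0))
+             for v, msgs in incoming.items()}
+ </code></pre>
+ </li>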
+ <li>
+ <p>After a sufficient number of message passing rounds, the Readout phase is invoked.</p>
+ </li>
+ </ul>
+
+ <h2 id="readout-phase">Readout Phase</h2>
+
+ <ul>
+ <li>
+ <p>The feature vectors corresponding to the different nodes in the graph are aggregated into a single feature vector (corresponding to the feature vector of the graph) using the <em>readout function</em>.</p>
+ </li>
+ <li>
+ <p>The <em>readout function</em> can also be implemented using a neural network, with the condition that it is invariant to the permutation of the nodes within the graph (to ensure that the MPNN is invariant to graph isomorphism).</p>
+ </li>
+ </ul>
+
+ <h1 id="existing-variants-in-literature">Existing Variants in the Literature</h1>
+
+ <ul>
+ <li>The paper provides various examples where existing architectures can be explained in terms of the message passing framework. This includes examples like <a href="https://arxiv.org/abs/1509.09292">Convolutional Networks on Graphs for Learning Molecular Fingerprints</a>, <a href="https://arxiv.org/abs/1511.05493">Gated Graph Sequence Neural Networks</a>, <a href="http://tkipf.github.io/graph-convolutional-networks/">Graph Convolutional Networks</a>, etc.</li>
+ </ul>
+
+ <h1 id="experiments">Experiments</h1>
+
+ <h2 id="setup">Setup</h2>
+
+ <ul>
+ <li>
+ <p>Broadly speaking, the task is to predict the properties of given molecules (a regression problem).</p>
+ </li>
+ <li>
+ <p>The QM9 dataset consists of 130K molecules whose properties have been measured using Quantum Mechanical Simulations (DFT).</p>
+ </li>
+ <li>
+ <p>Properties to be predicted include atomization energy, enthalpy, highest fundamental vibrational frequency, etc.</p>
+ </li>
+ <li>
+ <p>There are two benchmarks for error:</p>
+ <ul>
+ <li>
+ <p>DFT Error - the estimated average error of the DFT approximation</p>
+ </li>
+ <li>
+ <p>Chemical Accuracy - as established by the chemistry community</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="model">Model</h2>
+
+ <ul>
+ <li>
+ <p>The following variants of the <em>message function</em> are explored:</p>
+ <ul>
+ <li>
+ <p>Matrix multiplication between <em>A<sub>evw</sub></em> and <em>h<sub>v</sub></em>, where <em>A</em> is the adjacency matrix and <em>h<sub>v</sub></em> is the feature vector corresponding to node <em>v</em>.</p>
+ </li>
+ <li>
+ <p>Edge Network, which is the same as the matrix multiplication case with the difference that <em>A</em> is a learned matrix for each edge type.</p>
+ </li>
+ <li>
+ <p>Pair Network, where the feature vectors corresponding to the source node, the target node and the edge are fed to a neural network.</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="virtual-elements">Virtual Elements</h2>
+
+ <ul>
+ <li>
+ <p>Since all messages are shared via edges, it could take a long time for a message to move between the two ends of the graph. To speed up this process, virtual elements are added.</p>
+ </li>
+ <li>
+ <p>In the first setting, “virtual edges” are inserted between nodes.</p>
+ </li>
+ <li>
+ <p>In the second setting, a “master” node connects to all the other nodes.</p>
+ </li>
+ </ul>
+
+ <h2 id="message-passing-complexity">Message Passing Complexity</h2>
+
+ <ul>
+ <li>
+ <p>In a graph with <em>n</em> nodes and <em>d</em>-dimensional feature vectors, a single step of message passing has a worst-case time complexity of <em>O(n<sup>2</sup>d<sup>2</sup>)</em>.</p>
+ </li>
+ <li>
+ <p>This complexity can be reduced by breaking the <em>d</em>-dimensional embedding into <em>k</em> different groups of <em>d/k</em>-dimensional embeddings which can be updated in parallel.
The complexity of the modified approach is <em>O(n<sup>2</sup>d<sup>2</sup>/k)</em>.</p>
+ </li>
+ </ul>
+
+ <h1 id="results">Results</h1>
+
+ <ul>
+ <li>
+ <p>The best-performing MPNN model uses the edge network as the <em>message function</em> and <a href="https://arxiv.org/abs/1511.06391">set2set</a> as the <em>readout function</em>.</p>
+ </li>
+ <li>
+ <p>Using groups of embeddings helps to improve generalization. This effect could also be due to the ensemble-like nature of the modified architecture.</p>
+ </li>
+ <li>
+ <p>The model performs worse without the virtual elements.</p>
+ </li>
+ </ul>
+
+ <h1 id="takeaways">Takeaways</h1>
+
+ <ul>
+ <li>
+ <p>Long-range interaction between vertices is necessary.</p>
+ </li>
+ <li>
+ <p>Scaling to larger molecule sizes is challenging because the model creates a fully connected graph by incorporating virtual elements.</p>
+ </li>
+ </ul>
+
+
+
+
+ Unsupervised Learning by Predicting Noise
+
+ 2018-04-02T00:00:00-04:00
+ /site/2018/04/02/Unsupervised Learning By Predicting Noise
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>Convolutional Neural Networks are extremely good feature extractors, in the sense that features extracted for one task (say image classification) can be easily transferred to another task (say image segmentation).</p>
+ </li>
+ <li>
+ <p>Existing unsupervised approaches do not aim to learn discriminative features, and supervised approaches for discriminative features do not scale well.</p>
+ </li>
+ <li>
+ <p>The paper presents an approach to learn features in an unsupervised setting by using a set of target representations called Noise As Target (NAT), which acts as a kind of proxy supervising signal.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1704.05310">Link to the paper</a></p>
+ </li>
+ </ul>
+
+ <h2 id="approach">Approach</h2>
+
+ <h3 id="unsupervised-setting">Unsupervised Setting</h3>
+
+ <ul>
+ <li>Given a collection of images X (x<sub>1</sub>, x<sub>2</sub>, …, x<sub>n</sub>), we want to learn a parameterized mapping <em>f</em> such that <em>f(x<sub>i</sub>)</em> gives the features of image <em>x<sub>i</sub></em>. We jointly learn the target vectors <em>y<sub>i</sub></em> (more on this later).</li>
+ </ul>
+
+ <h3 id="loss-function">Loss Function</h3>
+
+ <ul>
+ <li>The squared L2 norm is used as the distance measure, while making sure that the final activations are unit normalized.</li>
+ </ul>
+
+ <h3 id="fixed-target-representation">Fixed Target Representation</h3>
+
+ <ul>
+ <li>
+ <p>In the setting where we are learning both the features and the target representation, a trivial solution would be the one where all the input images map to the same target and are assigned the same representation. No discriminative features are learned in this case.</p>
+ </li>
+ <li>
+ <p>To avoid such situations, a set of k predefined target representations is chosen and each image is mapped to one of these k representations (based on the features).</p>
+ </li>
+ <li>
+ <p>There is an assumption that k &gt; n so that each image is assigned a different target.</p>
+ </li>
+ <li>
+ <p>One simple choice of target representation is the standard one-hot vector, which implies that all the classes (and by extension, the associated images) are orthogonal and equidistant from each other.
But this is not a reasonable approximation, as not all the image pairs are equally similar or dissimilar.</p>
+ </li>
+ <li>
+ <p>Instead, the target vectors are uniformly sampled from a d-dimensional unit sphere, where d is the dimensionality of the feature representation. That is, the idea is to map the features to the manifold of the d-dimensional L2 sphere by using the k predefined representations as a discrete approximation of the manifold.</p>
+ </li>
+ <li>
+ <p>Since each data point (image) is mapped to a new point on the manifold, the algorithm is suited for online training as well.</p>
+ </li>
+ </ul>
+
+ <h3 id="optimisation">Optimisation</h3>
+
+ <ul>
+ <li>
+ <p>For the training, the number of targets k is reduced to the number of images n, and an assignment matrix P is learned which ensures that the mapping between images and targets is 1-to-1.</p>
+ </li>
+ <li>
+ <p>The resulting optimisation problem can be solved using the Hungarian Algorithm, but at a high cost of O(n<sup>3</sup>). An optimisation is to take a batch of b images and update only the square matrix P<sub>B</sub> of dimension b×b (made of the images and their corresponding targets). This reduces the overall complexity to O(nb<sup>2</sup>).</p>
+ </li>
+ <li>
+ <p>Other optimisation techniques that are common in supervised learning, like batch norm, are used in this setting as well.</p>
+ </li>
+ </ul>
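+ <p>A minimal sketch of the batch-wise reassignment step (using scipy’s Hungarian solver; the function and names are illustrative, not the paper’s code):</p>
+
+ <pre><code>import numpy as np
+ from scipy.optimize import linear_sum_assignment  # Hungarian algorithm
+
+ def reassign_batch_targets(features, targets):
+     """Re-match a batch of b images to their b noise targets (sketch).
+
+     features: (b, d) unit-normalized network outputs for the batch.
+     targets:  (b, d) noise targets currently assigned to these images.
+     Returns the permutation of targets that minimizes the squared L2
+     loss (equivalently, maximizes the total dot product).
+     """
+     cost = -features @ targets.T            # negate: maximize similarity
+     _, cols = linear_sum_assignment(cost)   # O(b^3) on the b x b submatrix
+     return targets[cols]
+ </code></pre>
+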
+ <h3 id="implementation-detail">Implementation Detail</h3>
+
+ <ul>
+ <li>
+ <p>Used AlexNet with NATs to train the unsupervised model.</p>
+ </li>
+ <li>
+ <p>An MLP is trained on these features to learn the classifier.</p>
+ </li>
+ <li>
+ <p>Standard preprocessing techniques like random cropping/flipping are used.</p>
+ </li>
+ </ul>
+
+ <h3 id="experimental-details">Experimental Details</h3>
+
+ <ul>
+ <li>
+ <p>Dataset</p>
+ <ul>
+ <li>
+ <p>ImageNet for training the AlexNet architecture with the proposed approach.</p>
+ </li>
+ <li>
+ <p>Pascal VOC 2007 for transfer learning experiments.</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Baselines</p>
+ <ul>
+ <li>
+ <p>Unsupervised approaches like autoencoder, GAN, BiGAN.</p>
+ </li>
+ <li>
+ <p>Self-supervised approaches.</p>
+ </li>
+ <li>
+ <p>SOTA models using handcrafted features (SIFT with Fisher Vectors).</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="observation">Observation</h2>
+
+ <ul>
+ <li>
+ <p>Using squared loss instead of softmax does not deteriorate the performance too much.</p>
+ </li>
+ <li>
+ <p>The authors compare the effect of using discrete vs continuous target representations for transfer learning. For the discrete representation, elements of the canonical basis of a k-dimensional space (k=1000, 10000, 100000) are used. Experiments demonstrate that d-dimensional continuous vectors perform much better than the discrete vectors.</p>
+ </li>
+ <li>
+ <p>While training the unsupervised network, its features were extracted after every 20 iterations to evaluate the performance on the transfer learning task. The test accuracy increases up to around 100 iterations and then saturates.</p>
+ </li>
+ <li>
+ <p>Comparing the visualization of the first convolutional layer filters (for AlexNet with and without supervision) shows that while the unsupervised filters are less sharp, they maintain the edge and orientation information.</p>
+ </li>
+ <li>
+ <p>The proposed unsupervised method outperforms all the unsupervised baselines and is competitive with respect to the supervised baseline. But it is still far behind the model using handcrafted features.</p>
+ </li>
+ <li>
+ <p>For transfer learning on Pascal VOC, the proposed approach beats the unsupervised baselines and works almost at par with the supervised approach.</p>
+ </li>
+ </ul>
+
+ <h2 id="notes">Notes</h2>
+
+ <ul>
+ <li>
+ <p>The paper proposed a simple unsupervised framework for learning discriminative features without having to rely on proxy tasks like image generation and without having to make assumptions about the input domain.</p>
+ </li>
+ <li>
+ <p>The key aspect of the proposed approach is that each image is assigned a unique point on the d-dimensional manifold, which means 2 images could be very close to each other on the manifold while being quite distinct in reality. It is interesting to see that such a simple strategy is able to give such good results.</p>
+ </li>
+ </ul>
+
+
+
+
+ The Lottery Ticket Hypothesis - Training Pruned Neural Networks
+
+ 2018-03-25T00:00:00-04:00
+ /site/2018/03/25/The Lottery Ticket Hypothesis - Training Pruned Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>Empirical evidence indicates that, at training time, neural networks need to be significantly larger than strictly necessary.</p>
+ </li>
+ <li>
+ <p>The paper proposes a hypothesis, called the <em>lottery ticket hypothesis</em>, to explain this behaviour.</p>
+ </li>
+ <li>
+ <p>The idea is the following - successful training of a neural network depends on a <em>lucky</em> random initialization of a subcomponent of the network. Such components are referred to as <em>lottery tickets</em>.</p>
+ </li>
+ <li>
+ <p>Larger networks are more likely to contain these <em>lottery tickets</em> and hence are easier to train.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1803.03635">Link to the paper</a></p>
+ </li>
+ </ul>
+
+ <h2 id="methodology">Methodology</h2>
+
+ <ul>
+ <li>
+ <p>Various aspects of the hypothesis are explored empirically.</p>
+ </li>
+ <li>
+ <p>Two tasks are considered - MNIST and XOR.</p>
+ </li>
+ <li>
+ <p>For each task, the paper considers networks of different sizes and empirically shows that larger networks are more likely to converge (or to perform better) within a fixed number of epochs as compared to smaller networks.</p>
+ </li>
+ <li>
+ <p>Given a large, trained network, some weights (or units) of the network are pruned and the resulting network is reset to its initial random weights.</p>
+ </li>
+ <li>
+ <p>The resulting network is the <em>lottery ticket</em> in the sense that, when the pruned network is trained, it is more likely to converge than an otherwise randomly initialised network of the same size. Further, it is more likely to match the original, larger network in terms of performance.</p>
+ </li>
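+ <li>
+ <p>The prune-and-reset loop can be sketched as follows (<em>train_fn</em> is a hypothetical routine that trains the masked network and returns its final weights):</p>
+ <pre><code>import numpy as np
+
+ def iterative_magnitude_pruning(init_weights, train_fn, rounds=5, frac=0.2):
+     """Sketch of the iterative prune-and-reset experiment.
+
+     init_weights: dict name -> initial weight array (saved before training).
+     Each round trains the masked network, prunes frac of the remaining
+     weights (by magnitude), and resets the survivors to their
+     original initialization.
+     """
+     masks = {k: np.ones_like(w) for k, w in init_weights.items()}
+     for _ in range(rounds):
+         trained = train_fn(init_weights, masks)
+         for k, w in trained.items():
+             alive = np.abs(w[masks[k] == 1])
+             threshold = np.quantile(alive, frac)  # prune smallest weights
+             masks[k] *= np.greater_equal(np.abs(w), threshold)
+     return {k: w * masks[k] for k, w in init_weights.items()}, masks
+ </code></pre>
+ </li>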
+ <li>
+ <p>The paper explores different aspects of this experiment:</p>
+ <ul>
+ <li>Pruning Strategies:
+ <ul>
+ <li>The one-shot strategy prunes the network in one go, while the iterative strategy prunes the network iteratively.</li>
+ <li>Though the latter is computationally more intensive, it is more likely to find a lottery ticket.</li>
+ </ul>
+ </li>
+ <li>
+ <p>The size of the pruned network affects the speed of convergence when training the <em>lottery ticket</em>.</p>
+ </li>
+ <li>
+ <p>If only the architecture or only the initial weights of the <em>lottery ticket</em> are used, the resulting network tends to converge more slowly and achieves a lower level of performance.</p>
+ </li>
+ <li>This indicates that the lottery ticket depends on both the network architecture and the weight initialization.</li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="discussion">Discussion</h2>
+
+ <ul>
+ <li>
+ <p>The paper includes some more interesting experiments. For instance, the distribution of initial values among the weights that survived the pruning suggests that weights that are small before training tend to remain small after training.</p>
+ </li>
+ <li>
+ <p>One interesting experiment would be to show the performance of the pruned network before resetting its weights and retraining. This performance should be compared with the performance of the initial large network and the performance of the <em>lottery ticket</em> after training.</p>
+ </li>
+ <li>
+ <p>Overall, the experiments are not sufficient to conclude anything about the correctness of the hypothesis. The proposition itself is very interesting and could enhance our understanding of how neural networks work.</p>
+ </li>
+ </ul>
+
+
+
+
+ Cyclical Learning Rates for Training Neural Networks
+
+ 2018-03-18T00:00:00-04:00
+ /site/2018/03/18/Cyclical Learning Rates for Training Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>Conventional wisdom says that when training neural networks, the learning rate should monotonically decrease. This insight forms the basis of different types of adaptive learning rates.</p>
+ </li>
+ <li>
+ <p>Counter to this expected behaviour, the paper demonstrates that using a cyclical learning rate (CLR), varying between a minimum and a maximum value, helps to train the neural network faster without requiring fine-tuning of the learning rate.</p>
+ </li>
+ <li>
+ <p>The paper also provides a simple approach to estimate the lower and upper bounds for the CLR.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1506.01186">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/bckenstler/CLR">Link to the implementation</a></p>
+ </li>
+ </ul>
+
+ <h2 id="intution">Intuition</h2>
+
+ <ul>
+ <li>
+ <p>Difficulty in minimizing the loss arises from saddle points and not from local minima. 
<a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">[Ref]</a></p> + </li> + <li> + <p>Increasing the learning rate allows for rapid traversal of saddle points.</p> + </li> + <li> + <p>Alternatively, the optimal learning rate is expected to be between bounds of CLR and thus the learning rate would always be close to the optimal learning rate.</p> + </li> +</ul> + +<h2 id="parameter-estimation">Parameter Estimation</h2> + +<ul> + <li> + <p>Cycle Length = Number of iterations till learning rate returns to the initial value = 2 * step_size</p> + </li> + <li> + <p>step_size should be set to 2-10 times the number of iterations in an epoch.</p> + </li> + <li> + <p>Estimating the CLR boundary values:</p> + + <ul> + <li> + <p>Run the model for several epochs while increasing the learning rate between the allowed low and high values.</p> + </li> + <li> + <p>Plot accuracy vs learning rate and note the learning rate values when the accuracy starts to fall.</p> + </li> + <li> + <p>This gives a good candidate value for upper and lower bound. Alternatively, the lower bound could be set to be 1/3 or 3/4 of the upper bound. But it is difficult to judge if the model has run for the sufficient number of epochs in the first place.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="notes">Notes</h2> + +<ul> + <li>The idea in itself is very simple and straight-forward to add to any existing model which makes it very appealing.</li> + <li>The author has experimented with various architectures and datasets (from vision domain) and has reported faster training results.</li> +</ul> + + + + + Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning + + 2018-03-11T00:00:00-05:00 + /site/2018/03/11/Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Information Extraction - Given a query to be answered and an external search engine, information extraction entails the task of issuing search queries, extracting information from new sources and reconciling the extracted values till we are sufficiently confident about the extracted values.</p> + </li> + <li> + <p>The paper proposes the use of Reinforcement Learning (RL) to solve this task.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1603.07954">Link to the paper</a></p> + </li> + <li> + <p><a href="https://github.com/karthikncode/DeepRL-InformationExtraction">Implementation</a></p> + </li> +</ul> + +<h2 id="key-aspect">Key Aspect</h2> + +<ul> + <li>Use of Reinforcement Learning to resolve the ambiguity inherent in the textual documents.</li> + <li>Given a query, the RL agent would use template statement to formulate the queries (to be performed on the black box search engine). 
+
+
+
+
+ Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning
+
+ 2018-03-11T00:00:00-05:00
+ /site/2018/03/11/Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>Information Extraction - given a query to be answered and an external search engine, information extraction entails the task of issuing search queries, extracting information from new sources and reconciling the extracted values till we are sufficiently confident about them.</p>
+ </li>
+ <li>
+ <p>The paper proposes the use of Reinforcement Learning (RL) to solve this task.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1603.07954">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/karthikncode/DeepRL-InformationExtraction">Implementation</a></p>
+ </li>
+ </ul>
+
+ <h2 id="key-aspect">Key Aspect</h2>
+
+ <ul>
+ <li>Use of Reinforcement Learning to resolve the ambiguity inherent in textual documents.</li>
+ <li>Given a query, the RL agent uses template statements to formulate the queries (to be performed on the black-box search engine). It then resolves and combines the results for the query from the set of retrieved documents.</li>
+ </ul>
+
+ <h2 id="datasets">Datasets</h2>
+
+ <ul>
+ <li>Database of Mass Shootings in the United States.</li>
+ <li>Food Shield database of illegal food adulteration.</li>
+ </ul>
+
+ <h2 id="framework">Framework</h2>
+
+ <ul>
+ <li>
+ <p>The information extraction task is modelled as a Markov Decision Process (MDP) &lt;S, A, T, R&gt;</p>
+ </li>
+ <li><strong>S</strong> - set of all possible states
+ <ul>
+ <li>The state consists of:
+ <ul>
+ <li>The extractor’s confidence in the predicted entity values.</li>
+ <li>The context from which the values are extracted.</li>
+ <li>The similarity between the new document (just retrieved from the search engine) and the original document accompanying the given query.</li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ <li><strong>A</strong> - set of all possible actions
+ <ul>
+ <li>Reconciliation decision - d
+ <ul>
+ <li>Accept all entity values.</li>
+ <li>Reject all entity values.</li>
+ <li>Stop the current episode.</li>
+ </ul>
+ </li>
+ <li>Query choice - q
+ <ul>
+ <li>Choose the next query from a set of automatically generated alternatives.</li>
+ </ul>
+ </li>
+ </ul>
+ </li>
+ <li><strong>R</strong> - rewards
+ <ul>
+ <li>Maximise the final extraction accuracy while minimising the number of queries.</li>
+ </ul>
+ </li>
+ <li><strong>Q</strong> - queries
+ <ul>
+ <li>Generated using a template.</li>
+ <li>The query is searched on a search engine and the top k links are retrieved.</li>
+ </ul>
+ </li>
+ <li><strong>Transition</strong>
+ <ul>
+ <li>Start with a single source article x<sub>i</sub> and extract the initial set of entities.</li>
+ <li>At each timestep, the agent is given the state (s), on the basis of which it chooses the action (d, q). 
The episode stops whenever the action is a stop action.</li>
+ </ul>
+ </li>
+ <li>
+ <p>A Deep Q-Network is used.</p>
+ </li>
+ <li>Parameters are learned using SGD with RMSProp.</li>
+ </ul>
+
+ <h2 id="experimental-setup">Experimental Setup</h2>
+
+ <h3 id="extraction-model">Extraction Model</h3>
+
+ <ul>
+ <li>A Max Entropy (MaxEnt) classifier is used as the base extraction system.</li>
+ <li>First, all the words in the document are tagged as one of the entity types, and the mode of these values is used to obtain the set of extracted entities.</li>
+ </ul>
+
+ <h3 id="baseline">Baseline</h3>
+
+ <ul>
+ <li>Basic extractors.</li>
+ <li>An aggregation system which either chooses the entity value with the highest confidence or takes a majority vote over all extracted values.</li>
+ <li>A meta-classifier which operates over the same input state space and produces the same set of reconciliation decisions as the DQN.</li>
+ <li>An oracle extractor which is computed assuming perfect reconciliation and query decisions on top of the MaxEnt base extractor.</li>
+ </ul>
+
+ <h3 id="rl-models">RL Models</h3>
+
+ <ul>
+ <li>RL Basic - only the reconciliation decision.</li>
+ <li>RL Query - only the query decision with a fixed reconciliation strategy.</li>
+ <li>RL Extract - the full system with both reconciliation and query decisions.</li>
+ </ul>
+
+ <h2 id="result">Result</h2>
+
+ <ul>
+ <li>RL Extract obtains substantial gains, e.g., up to 11% over the MaxEnt baseline.</li>
+ <li>Simple aggregation schemes do not handle the task well.</li>
+ <li>In terms of reward structure, providing rewards after each step works better than a single delayed reward.</li>
+ </ul>
+
+
+
+
+ An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
+
+ 2018-03-05T00:00:00-05:00
+ /site/2018/03/05/An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p><em>Catastrophic Forgetting</em> refers to the phenomenon where a learning system trained on two tasks in succession may forget how to perform the first task.</p>
+ </li>
+ <li>
+ <p>The paper investigates this behaviour for different activation functions, in the presence and absence of dropout.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1312.6211">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/goodfeli/forgetting">Link to the implementation</a></p>
+ </li>
+ </ul>
+
+ <h2 id="experiment-formulation">Experiment Formulation</h2>
+
+ <ul>
+ <li>
+ <p>For each experiment, two tasks are defined - an “old” task and a “new” task.</p>
+ </li>
+ <li>
+ <p>The network is first trained on the “old” task until the validation set error has not improved for the last 100 epochs.</p>
+ </li>
+ <li>
+ <p>The “best” performing model is then trained on the “new” task until the combined error on the “old” and the “new” validation datasets has not improved in the last 100 epochs.</p>
+ </li>
+ <li>
+ <p>All the tasks used the same model architecture - 2 hidden layers followed by a softmax layer.</p>
+ </li>
+ <li>The following activations were tested:
+ <ul>
+ <li>Sigmoid</li>
+ <li>ReLU</li>
+ <li>Hard Local Winner-Take-All (LWTA)</li>
+ <li>Maxout</li>
+ </ul>
+ </li>
+ <li>
+ <p>Models were trained using SGD, with or without dropout.</p>
+ </li>
+ <li>
+ <p>For each combination of model, activation and training mechanism, a random hyperparameter search was performed over a set of 25 configurations.</p>
+ </li>
+ <li>The authors took care to keep the hyperparameters and other settings consistent 
and comparable across different experiments. Deviations, wherever applicable, and their reasons were documented.</li>
+ </ul>
+
+ <h2 id="observations">Observations</h2>
+
+ <ul>
+ <li>
+ <p>In terms of the relationship between the “old” and the “new” tasks, three kinds of settings are considered:</p>
+ <ul>
+ <li>
+ <p>The tasks are very similar, but the input is processed in a different format. For this setting, the MNIST dataset was used with a different permutation of pixels for the “old” and the “new” task.</p>
+ </li>
+ <li>
+ <p>The tasks are similar but not exactly the same. For this setting, the task was to predict sentiments of reviews across 2 different product categories.</p>
+ </li>
+ <li>
+ <p>In the last setting, 2 dissimilar tasks were used. One task was to predict the sentiment of reviews and the other was to perform classification over the MNIST dataset (reduced to 2 classes).</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>Using dropout improved the overall validation performance for all the models on all the tasks.</p>
+ </li>
+ <li>
+ <p>Using dropout also increases the size of the optimal model across all the activations, indicating that maybe the increased size of the model could explain the increased resistance to forgetting. It would have been interesting to check if dropout always selected the largest model possible given the set of hyperparameters.</p>
+ </li>
+ <li>
+ <p>On the dissimilar tasks, dropout improved the performance while reducing the model size, so it might have other properties as well that help prevent forgetting.</p>
+ </li>
+ <li>
+ <p>As compared to the choice of training technique, the activation function has a less consistent effect on resistance to forgetting. The paper recommends performing cross-validation for the choice of the activation function. If that is not feasible, the maxout activation function with dropout could be used.</p>
+ </li>
+ </ul>
+
+
+
+
+ Learning a SAT Solver from Single-Bit Supervision
+
+ 2018-02-24T00:00:00-05:00
+ /site/2018/02/24/Learning a SAT Solver from Single-Bit Supervision
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>The paper presents NeuroSAT, a message passing neural network that is trained to predict if a given SAT problem can be solved. 
As a side effect of training, the model also learns how to solve the SAT problem itself, without any extra supervision.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1802.03685">Link to the paper</a></p>
+ </li>
+ </ul>
+
+ <h2 id="background">Background</h2>
+
+ <ul>
+ <li>
+ <p>Given an expression in propositional logic, the task is to predict if there exists a substitution of variables that makes the expression true.</p>
+ </li>
+ <li>
+ <p>The expression itself can be written as a conjunction of disjunctions (“and” over “or”), where each conjunct is called a clause and each (possibly negated) variable within a clause is called a literal.</p>
+ </li>
+ <li>
+ <p>Invariants</p>
+ <ul>
+ <li>
+ <p>The variables, clauses or literals (within the clauses) can be permuted.</p>
+ </li>
+ <li>
+ <p>Every occurrence of a variable can be negated.</p>
+ </li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="model">Model</h2>
+
+ <ul>
+ <li>
+ <p>Given the SAT problem, create an undirected graph of the literals, their negations and the clauses they belong to.</p>
+ </li>
+ <li>
+ <p>Put an edge between every literal and the clause to which it belongs, and another kind of edge between every literal and its negation.</p>
+ </li>
+ <li>
+ <p>Perform message passing between nodes to obtain vector representations corresponding to each node. Specifically, first, each clause receives a message from its neighbours (literals) and updates its embeddings. Then every literal receives a message from its neighbours (both literals and clauses) and updates its embeddings.</p>
+ </li>
+ <li>
+ <p>After T iterations, the nodes vote to decide the prediction of the model as a whole.</p>
+ </li>
+ <li>
+ <p>The model is trained end-to-end using the cross-entropy loss between the logit and the true label.</p>
+ </li>
+ <li>
+ <p>Permutation invariance is ensured by operating on the nodes and the edges in the topological order, and negation invariance is ensured by treating all literals the same.</p>
+ </li>
+ </ul>
+
+ <h2 id="decoding-satisfying-assignment">Decoding Satisfying Assignment</h2>
+
+ <ul>
+ <li>
+ <p>The most interesting aspect of this work is that even though the model was trained to predict if the SAT problem can be satisfied, it is actually possible to extract the correct assignment from the classifier.</p>
+ </li>
+ <li>
+ <p>In the early iterations, all the nodes vote “unsolvable” with low confidence. Then a few nodes start voting “solvable”, and then a phase transition happens where most of the nodes start voting “solvable” with high confidence.</p>
+ </li>
+ <li>
+ <p>The model never becomes highly confident that the problem is “unsolvable” and almost never guesses “solvable” on an “unsolvable” problem. So in some sense, the model is looking for the combination of literals that actually solves the problem.</p>
+ </li>
+ <li>
+ <p>The authors found that the 2-dimensional PCA projections of the literal embeddings are initially mixed up but become more and more linearly separable as the phase transition happens.</p>
+ </li>
+ <li>
+ <p>Based on this insight, the authors propose to obtain cluster centres C1 and C2, partition the variables according to the cluster centres and then try assignments from both the partitions.</p>
+ </li>
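+ <li>
+ <p>The decoding step can be sketched roughly as follows (using k-means as a stand-in for the clustering; the embedding layout is a hypothetical choice):</p>
+ <pre><code>import numpy as np
+ from sklearn.cluster import KMeans
+
+ def decode_assignment(literal_embs, n_vars):
+     """Rough sketch of NeuroSAT's decoding trick.
+
+     literal_embs: (2 * n_vars, d) final literal embeddings; row i is
+     literal x_i and row i + n_vars is its negation (hypothetical layout).
+     """
+     labels = KMeans(n_clusters=2).fit_predict(literal_embs)
+     # Partition variables by the cluster of their positive literal and
+     # try both readings of the two clusters as True/False.
+     candidate_1 = labels[:n_vars] == 0
+     candidate_2 = labels[:n_vars] == 1
+     return candidate_1, candidate_2
+ </code></pre>
+ </li>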
+ <li>
+ <p>This alone provides a satisfying solution in over 70% of the cases, even though there is no explicit supervising signal about how to solve the problem.</p>
+ </li>
+ <li>
+ <p>The other strengths of the paper include:</p>
+ <ul>
+ <li>
+ <p>Generalizing to longer and more difficult SAT problems (than those seen during training).</p>
+ </li>
+ <li>
+ <p>Generalizing to other kinds of search problems like graph colouring, clique detection, etc. (over small random graphs).</p>
+ </li>
+ </ul>
+ </li>
+ <li>
+ <p>The paper also reports that by adding a supervising signal about which clauses in the given expression are unsatisfiable, it is possible to decode, at test time, the literals which prove the “unsatisfiability” of an expression. Though not a lot of details are provided about this part; they would probably be covered in the next iteration of the paper.</p>
+ </li>
+ </ul>
+
+
+
+
+ Neural Relational Inference for Interacting Systems
+
+ 2018-02-17T00:00:00-05:00
+ /site/2018/02/17/Neural Relational Inference for Interacting Systems
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>The paper presents the Neural Relational Inference (NRI) model, which can infer the underlying interactions in a dynamical system in an unsupervised manner, using just the observational data in the form of trajectories.</p>
+ </li>
+ <li>
+ <p>For instance, consider a simulated system where the particles are connected to each other by springs. The observational data does not explicitly specify which particles are connected to each other and only contains information like the position and velocity of each particle at different timesteps.</p>
+ </li>
+ <li>
+ <p>The task is to explicitly infer the interaction structure (in this example, which pairs of particles are connected to each other) while learning the dynamical model of the system itself.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1802.04687">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/ethanfetaya/nri">Link to the implementation</a></p>
+ </li>
+ </ul>
+
+ <h2 id="model">Model</h2>
+
+ <ul>
+ <li>
+ <p>The model consists of an encoder that encodes the given trajectories into an interaction graph and a decoder that decodes the dynamical model given the interaction graph.</p>
+ </li>
+ <li>
+ <p>The model starts by assuming that a fully connected interaction graph exists between the objects in the system.</p>
+ </li>
+ <li>
+ <p>For this latent graph <strong>z</strong>, <em>z<sub>i, j</sub></em> denotes the (discrete) edge type between objects <em>v<sub>i</sub></em> and <em>v<sub>j</sub></em>, with the assumption that there are <em>K</em> edge types.</p>
+ </li>
+ <li>
+ <p>The object <em>v<sub>i</sub></em> has a feature vector <em>x<sub>i</sub><sup>t</sup></em> associated with it at time <em>t</em>. 
This feature vector captures information like location and velocity.</p>
+ </li>
+ </ul>
+
+ <h3 id="encoder">Encoder</h3>
+
+ <ul>
+ <li>
+ <p>A Graph Neural Network (GNN) acts on the fully connected latent graph <em>z</em>, performs message passing from node to node via edges and predicts the discrete label for each edge.</p>
+ </li>
+ <li>
+ <p>The GNN architecture may itself use MLPs or ConvNets and returns a factorised distribution over the edge types <em>q<sub>φ</sub>(z|x)</em>.</p>
+ </li>
+ </ul>
+
+ <h3 id="decoder">Decoder</h3>
+
+ <ul>
+ <li>
+ <p>The decoder is another GNN (with separate params for each edge type) that predicts the future dynamics of the system and returns <em>p<sub>θ</sub>(x|z)</em>.</p>
+ </li>
+ <li>
+ <p>The overall model is a VAE that optimizes the ELBO, given as: E<sub>q<sub>φ</sub>(z|x)</sub>[log p<sub>θ</sub>(x|z)] − KL[q<sub>φ</sub>(z|x)||p<sub>θ</sub>(z)]</p>
+ </li>
+ <li>
+ <p><em>p<sub>θ</sub>(z)</em> is the prior, which is assumed to be a uniform distribution over the edge types.</p>
+ </li>
+ <li>
+ <p>Instead of predicting the dynamics of the system for just the next timestep, the paper chooses to predict multiple steps (10) into the future. This ensures that the interactions can have a significant effect on the dynamics of the system.</p>
+ </li>
+ <li>In some cases, like real humans playing a physical sport, the dynamics of the system need not be Markovian, and a recurrent decoder is used to model the time dependence.</li>
+ </ul>
+
+ <h2 id="pipeline">Pipeline</h2>
+
+ <ul>
+ <li>
+ <p>Given the dynamical system, run the encoder to obtain <em>q<sub>φ</sub>(z|x)</em>.</p>
+ </li>
+ <li>
+ <p>Sample <em>z<sub>i, j</sub></em> from <em>q<sub>φ</sub>(z|x)</em>.</p>
+ </li>
+ <li>
+ <p>Run the decoder to predict the future dynamics for the next T timesteps.</p>
+ </li>
+ <li>
+ <p>Optimise the ELBO loss.</p>
+ </li>
+ <li>
+ <p>Note that since the latent variables (edge labels) are discrete in this case, the sampling is done from a continuous approximation of the discrete distribution, and the reparameterization trick is applied over this approximation to get the (biased) gradients.</p>
+ </li>
+ </ul>
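+ <p>The continuous approximation mentioned in the last step is the Gumbel-softmax (concrete) relaxation; a minimal numpy sketch (illustrative, not the paper’s code):</p>
+
+ <pre><code>import numpy as np
+
+ def gumbel_softmax_sample(logits, tau=0.5):
+     """Concrete / Gumbel-softmax relaxation (sketch), used to sample
+     discrete edge types while keeping the model differentiable.
+
+     logits: (n_edges, K) unnormalized edge-type scores from the encoder.
+     """
+     u = np.random.uniform(1e-10, 1.0, size=logits.shape)
+     g = -np.log(-np.log(u))                   # Gumbel noise
+     y = np.exp((logits + g) / tau)
+     return y / y.sum(axis=-1, keepdims=True)  # soft one-hot over K types
+ </code></pre>
+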
+ <h2 id="observations">Observations</h2>
+
+ <ul>
+ <li>
+ <p>Experiments are performed using simulated systems, like particles connected by springs, phase-coupled oscillators and charged particles, and using real-world data, like the CMU Motion Capture database and NBA tracking data.</p>
+ </li>
+ <li>
+ <p>The NRI system effectively predicts the dynamics of the systems and is able to reconstruct the ground truth interaction graph (for the simulated systems).</p>
+ </li>
+ </ul>
+
+
+
+
+ Stylistic Transfer in Natural Language Generation Systems Using Recurrent Neural Networks
+
+ 2018-02-11T00:00:00-05:00
+ /site/2018/02/11/Stylistic Transfer in Natural Language Generation Systems Using Recurrent Neural Networks
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li><a href="https://aclweb.org/anthology/W/W16/W16-6010.pdf">This workshop paper</a> explores the problem of style transfer in natural language generation (NLG).</li>
+ <li>One possible manifestation would be rewriting technical articles in an easy-to-understand manner.</li>
+ </ul>
+
+ <h2 id="challenges">Challenges</h2>
+
+ <ul>
+ <li>Identifying relevant stylistic cues and using them to control text generation in NLG systems.</li>
+ <li>Absence of a large amount of training data.</li>
+ </ul>
+
+ <h2 id="pitch">Pitch</h2>
+
+ <ul>
+ <li>Using Recurrent Neural Networks (RNNs) to disentangle the style from the semantic content.</li>
+ <li>An autoencoder model with two components - one for learning style and another for learning content.</li>
+ <li>This allows the “style” component to be replaced while keeping the “content” component the same, resulting in a style transfer.</li>
+ <li>One way to think about this is - the encoder generates a 100-dimensional vector. In this, the first 50 entries correspond to the “style” component and the remaining to the “content” component.</li>
+ <li>The proposal is that the loss function should be modified to include a cross-covariance term for ensuring disentanglement.</li>
+ <li>I think one way of doing this is to have two loss functions:
+ <ul>
+ <li>The <strong>first loss</strong> function ensures that the input sentence is decoded properly into the target sentence. This loss is computed for each sentence.</li>
+ <li>The <strong>second loss</strong> ensures that the first 50 entries across all the encoded representations are correlated. This loss operates at the batch level.</li>
+ <li>The <strong>total loss</strong> is the weighted sum of these 2 losses.</li>
+ </ul>
+ </li>
+ </ul>
+
+ <h2 id="possible-datasets">Possible Datasets</h2>
+
+ <ul>
+ <li><a href="http://norvig.com/ngrams/shakespeare.txt">Complete works of Shakespeare</a></li>
+ <li><a href="https://www.kaggle.com/c/wikichallenge/data">Wikipedia Kaggle dataset</a></li>
+ <li><a href="https://ota.ox.ac.uk/">Oxford Text Archive</a></li>
+ <li>Twitter data</li>
+ </ul>
+
+ <h2 id="possible-metrics">Possible Metrics</h2>
+
+ <ul>
+ <li>Soundness - is the generated text entailed by the input sentence.</li>
+ <li>Coherence - free of grammatical errors, proper word usage, etc.</li>
+ <li>Effectiveness - how effective was the style transfer.</li>
+ <li>Since some of the metrics are subjective, human evaluators also need to be employed.</li>
+ </ul>
+
+
+
+
+ Get To The Point - Summarization with Pointer-Generator Networks
+
+ 2018-02-05T00:00:00-05:00
+ /site/2018/02/05/Get To The Point-Summarization with Pointer-Generator Networks
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p><a href="https://gist.github.com/shagunsodhani/a2915921d7d0ac5cfd0e379025acfb9f">Sequence-to-Sequence models</a> have made abstractive summarization viable, but they still suffer from issues like <em>out-of-vocabulary</em> words and repetitive sentences.</p>
+ </li>
+ <li>
+ <p>The paper proposes to overcome these limitations by using a hybrid Pointer-Generator network (to copy words from the source text) and a <em>coverage</em> vector that keeps track of content that has already been summarized, so as to discourage repetition.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1704.04368">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/abisee/pointer-generator">Code</a></p>
+ </li>
+ </ul>
+
+ <h2 id="model">Model</h2>
+
+ <h3 id="pointer-generator-network">Pointer Generator Network</h3>
+
+ <ul>
+ <li>
+ <p>It is a hybrid model between the Sequence-to-Sequence network and the <a href="https://shagunsodhani.in/papers-I-read/Pointer-Networks">Pointer Network</a>: when generating a word, the model decides whether the word would be generated using the softmax vocabulary (Sequence-to-Sequence) or using the source vocabulary (Pointer Network).</p>
+ </li>
+ <li>
+ <p>Since the model can choose a word from the source vocabulary, the issue of <em>out-of-vocabulary</em> words is handled.</p>
+ </li>
+ </ul>
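+ <p>The per-step output distribution can be sketched as a p<sub>gen</sub>-weighted mixture over an extended vocabulary (a minimal sketch, not the paper’s implementation):</p>
+
+ <pre><code>import numpy as np
+
+ def final_distribution(p_gen, p_vocab, attention, src_ids, vocab_size):
+     """Pointer-generator mixture for one decoding step (a minimal sketch).
+
+     p_gen: scalar in [0, 1] emitted by the model; p_vocab: (V,) softmax
+     over the output vocabulary; attention: (L,) attention over the L
+     source tokens; src_ids: (L,) ids of the source tokens, where OOV
+     words get ids past vocab_size in an extended vocabulary.
+     """
+     # generous upper bound on the extended vocabulary size
+     p_final = np.zeros(vocab_size + len(src_ids))
+     p_final[:vocab_size] = p_gen * p_vocab
+     for pos, idx in enumerate(src_ids):
+         p_final[idx] += (1.0 - p_gen) * attention[pos]  # copy probability
+     return p_final
+ </code></pre>
+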
+ <h3 id="coverage-mechanism">Coverage Mechanism</h3>
+
+ <ul>
+ <li>
+ <p>The model maintains a <em>coverage</em> vector which is the sum of the attention distributions over all previous decoder timesteps.</p>
+ </li>
+ <li>
+ <p>This <em>coverage</em> vector is fed as an input to the attention mechanism.</p>
+ </li>
+ <li>
+ <p>A <em>coverage loss</em> is added to prevent the model from repeatedly attending to the same word.</p>
+ </li>
+ <li>
+ <p>The idea is to capture how much coverage different words have already received from the attention mechanism.</p>
+ </li>
+ </ul>
+
+ <h2 id="observation">Observation</h2>
+
+ <ul>
+ <li>
+ <p>The model, when evaluated on the CNN/Daily Mail summarization task, outperforms the state-of-the-art by at least 2 ROUGE points, though it still does not outperform the lead-3 baseline.</p>
+ </li>
+ <li>
+ <p>The lead-3 baseline uses the first 3 sentences as the summary of the article, which should be a strong baseline given that the dataset consists of news articles.</p>
+ </li>
+ <li>
+ <p>The model is initially trained without coverage and then finetuned with the coverage loss.</p>
+ </li>
+ <li>
+ <p>During training, the model first learns how to copy words and then how to generate words (p<sup>gen</sup> starts from 0.3 and converges to 0.53).</p>
+ </li>
+ <li>
+ <p>During testing, the model strongly prefers copying over generating (p<sup>gen</sup> = 0.17).</p>
+ </li>
+ <li>
+ <p>Further, whenever the model is at the beginning of a sentence or at the join between stitched-together fragments, it prefers to generate a word instead of copying one from the source text.</p>
+ </li>
+ <li>
+ <p>The overall model is very simple, neat and interpretable, and also performs well in practice.</p>
+ </li>
+ </ul>
+
+
+
+
+ StarSpace - Embed All The Things!
+
+ 2018-01-29T00:00:00-05:00
+ /site/2018/01/29/StarSpace - Embed All The Things
+ <h2 id="introduction">Introduction</h2>
+
+ <ul>
+ <li>
+ <p>The paper describes a general-purpose neural embedding model where different types of entities (described in terms of discrete features) are embedded in a common vector space.</p>
+ </li>
+ <li>
+ <p>A similarity function is learnt to compare these entities in a meaningful way and score their similarity. The definition of the similarity function can depend on the downstream task where the embeddings are used.</p>
+ </li>
+ <li>
+ <p><a href="https://arxiv.org/abs/1709.03856">Link to the paper</a></p>
+ </li>
+ <li>
+ <p><a href="https://github.com/facebookresearch/StarSpace">Link to the implementation</a></p>
+ </li>
+ </ul>
+
+ <h2 id="approach">Approach</h2>
+
+ <ul>
+ <li>
+ <p>Each entity is described as a set of discrete features. For example, for the recommendation use case, a user may be described as the bag of movies they have liked. For the search use case, a document may be described as the bag of words it is made up of.</p>
+ </li>
+ <li>
+ <p>Given a dataset and a task at hand, generate a set of positive samples <em>E = (a, b)</em> such that <em>a</em> is the input to the task (from the dataset) and <em>b</em> is the expected label (answer/entity) for the given task.</p>
+ </li>
+ <li>
+ <p>Similarly, generate another set of negative samples <em>E<sup>-</sup> = (a, b<sub>i</sub><sup>-</sup>)</em> such that <em>b<sub>i</sub><sup>-</sup></em> is one of the incorrect labels (answer/entity) for the given task. The incorrect entity can be sampled randomly from the set of candidate entities. Multiple incorrect samples could be generated for each positive example. 
These incorrect samples are indexed using <em>i</em>.</p>
+ </li>
+ <li>
+ <p>For example, in the case of a supervised learning problem like document classification, <em>a</em> would be one of the documents (probably described in terms of words), <em>b</em> is the correct label and <em>b<sub>i</sub><sup>-</sup></em> is a randomly sampled label from the set of all the labels (excluding the correct label).</p>
+ </li>
+ <li>
+ <p>In the case of collaborative filtering, <em>a</em> would be the user (either described as a discrete entity like a userid or in terms of the items purchased so far), <em>b</em> is the next item the user purchases and <em>b<sub>i</sub><sup>-</sup></em> is a randomly sampled item from the set of all the items.</p>
+ </li>
+ <li>
+ <p>A similarity function is chosen to compare the representations of entities of type <em>a</em> and <em>b</em>. The paper considered cosine similarity and inner product, and observed that cosine similarity works better for the case with a large number of entities.</p>
+ </li>
+ <li>
+ <p>A loss function compares the similarity between positive pairs <em>(a, b)</em> and <em>(a, b<sub>i</sub><sup>-</sup>)</em>. The paper considered the margin ranking loss and the negative log loss of softmax, and reported that the margin ranking loss works better.</p>
+ </li>
+ <li>
+ <p>The norm of the embeddings is capped at 1.</p>
+ </li>
+ </ul>
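+ <p>For a single positive pair and k sampled negatives, the margin ranking objective can be sketched as (a minimal sketch; names are illustrative):</p>
+
+ <pre><code>import numpy as np
+
+ def margin_ranking_loss(emb_a, emb_b, emb_negs, margin=0.2):
+     """StarSpace-style margin ranking objective for one positive pair.
+
+     emb_a, emb_b: embeddings of the input entity and its correct label
+     (each the sum of its feature embeddings); emb_negs: (k, d)
+     embeddings of k sampled incorrect labels.
+     """
+     def cos(u, v):
+         return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
+     pos = cos(emb_a, emb_b)
+     # hinge on each negative: penalize negatives scoring within the margin
+     return sum(max(0.0, margin - pos + cos(emb_a, neg)) for neg in emb_negs)
+ </code></pre>
+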
<h2 id="observations">Observations</h2> + +<ul> + <li> + <p>The same model architecture is applied to a variety of tasks including multi-class classification, multi-label classification, collaborative filtering, content-based recommendation, link prediction, information retrieval, word embeddings and sentence embeddings.</p> + </li> + <li> + <p>The model provides a strong baseline on all the tasks and performs on par with much more complicated and task-specific networks.</p> + </li> +</ul> + + + + + + Emotional Chatting Machine - Emotional Conversation Generation with Internal and External Memory + + 2018-01-22T00:00:00-05:00 + /site/2018/01/22/Emotional Chatting Machine-Emotional Conversation Generation with Internal and External Memory + <ul> + <li> + <p>The paper proposes ECM (Emotional Chatting Machine) which can generate both semantically and emotionally appropriate responses in a dialogue setting.</p> + </li> + <li> + <p>More specifically, given an input utterance or dialogue and the desired emotional category of the response, the task for ECM is to generate an appropriate response that conforms to the given emotional category.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1704.01074">Link to the paper</a></p> + </li> + <li> + <p>Much of the recent, deep learning based work on conversational agents has focused on the encoder-decoder framework, where the input utterance (given sequence of words) is mapped to a response utterance (target sequence of words). This is the so-called seq2seq family of models.</p> + </li> + <li> + <p>The ECM model sits within this framework and introduces 3 new components:</p> + + <ul> + <li><strong>Emotion Category Embedding</strong> + <ul> + <li>Embed the emotion categories into a real-valued, low-dimensional vector space.</li> + <li>These embeddings are used as input to the decoder and are learnt along with the rest of the model.</li> + </ul> + </li> + <li><strong>Internal Memory</strong> + <ul> + <li>Psychologically, emotional responses are relatively short-lived and involve changes.</li> + <li>ECM accounts for this effect by adding an Internal Memory which captures the dynamics of emotion during decoding (a sketch of the decay step appears at the end of this summary).</li> + <li>It starts with “full” emotions in the beginning and keeps decaying the emotion value over time.</li> + <li>How much of the emotion value is to be decayed is determined by a sigmoid gate.</li> + <li>By the time the sentence is decoded, the value becomes zero, signifying that the emotion has been completely expressed.</li> + </ul> + </li> + <li><strong>External Memory</strong> + <ul> + <li>Emotional responses are expected to carry emotionally strong words along with generic, neutral words.</li> + <li>An external memory is used to include the emotionally strong words explicitly by using 2 non-overlapping vocabularies - the <em>generic</em> vocabulary and the <em>emotion</em> vocabulary (read from the external memory).</li> + <li>Both these vocabularies are assigned different generation probabilities and an output gate controls the weights of <em>generic</em> and <em>emotion</em> words.</li> + <li>This way the <em>emotion</em> words are included in an otherwise neutral response.</li> + </ul> + </li> + </ul> + </li> + <li> + <p><strong>Loss function</strong></p> + + <ul> + <li>The first component is the cross-entropy loss between the predicted and target token distributions.</li> + <li>A regularization term on the internal memory makes sure the emotional state decays to 0 at the end of the decoding process.</li> + <li>Another regularization term on the external memory supervises the probability of selecting a <em>generic</em> vs <em>emotion</em> word.</li> + </ul> + </li> + <li><strong>Dataset</strong> + <ul> + <li>STC Dataset (~220K posts and ~4300K responses) annotated by an emotion classifier. Any error on the part of the classifier degrades the quality of the training dataset.</li> + <li>NLPCC Dataset - Emotion classification dataset with 23105 sentences.</li> + </ul> + </li> + <li> + <p><strong>Metric</strong></p> + + <ul> + <li>Perplexity to evaluate the model at the content level.</li> + <li>Emotion accuracy to evaluate the model at the emotional level.</li> + </ul> + </li> + <li> + <p>ECM achieves a perplexity of 65.9 and an emotion accuracy of 0.773.</p> + </li> + <li> + <p>Based on human evaluations, ECM statistically outperforms the seq2seq baselines on both naturalness (likeliness of the response being generated by a human) and emotion accuracy.</p> + </li> + <li> + <p>Notes</p> + + <ul> + <li>It is an interesting idea to let the sigmoid gate decide how the emotion “value” is spent while decoding. It is similar to deciding how much to “attend” to the emotion value, the key difference being that the total attention is limited. It would be interesting to see the shape of the distribution of how much of the emotion value is spent at each decoding time step.
If the curve is highly biased towards, say, using most of the emotion value towards the end of the decoding process, maybe another regularisation term is needed to ensure a more balanced distribution of how the emotion is spent.</li> + </ul> + </li> +</ul>
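+<p>A simplified numpy sketch of the internal memory decay discussed above; the actual model uses separate read and write gates conditioned on richer inputs, so the single gate and <code>W_gate</code> here are illustrative assumptions:</p>
+
+<pre><code class="language-python">import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def decay_emotion_state(e_state, decoder_state, W_gate):
+    """One decoding step: a sigmoid gate (conditioned on the decoder
+    state) decides how much of the remaining emotion value is expressed
+    now; the state decays monotonically towards zero."""
+    gate = sigmoid(W_gate @ decoder_state)  # element-wise values in (0, 1)
+    e_read = gate * e_state                 # portion of emotion spent at this step
+    return e_read, e_state - e_read         # expressed part, remaining state
+</code></pre>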
+ + + + + Exploring Models and Data for Image Question Answering + + 2018-01-14T00:00:00-05:00 + /site/2018/01/14/Exploring Models and Data for Image Question Answering + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p><strong>Problem Statement</strong>: Given an image, answer a given question about the image.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1505.02074">Link to the paper</a></p> + </li> + <li> + <p><strong>Assumptions</strong>:</p> + <ul> + <li>The answer is assumed to be a single word, thereby bypassing the evaluation issues of multi-word generation tasks.</li> + </ul> + </li> +</ul> + +<h2 id="vis-lstm-model">VIS-LSTM Model</h2> + +<ul> + <li>Treat the input image as the first word in the question.</li> + <li>Obtain the vector representation (skip-gram) for words in the question.</li> + <li>Obtain the VGG Net embeddings of the image and use a linear transformation (dimensionality reduction weight matrix) to match the dimensions of the word embeddings.</li> + <li>Keep the image embedding frozen during training and use an LSTM to combine the word vectors.</li> + <li>LSTM outputs are fed into a softmax layer which generates the answer.</li> +</ul> + +<h2 id="dataset">Dataset</h2> + +<ul> + <li>DAtaset for QUestion Answering on Real-world images (DAQUAR) + <ul> + <li>1300 images and 7000 questions with 37 object classes.</li> + <li>The downside is that even guesswork can yield good results.</li> + </ul> + </li> + <li>The paper proposed an algorithm for generating questions using the MS-COCO dataset. + <ul> + <li>Perform preprocessing steps like breaking large sentences and changing indefinite determiners to definite ones.</li> + <li><em>object</em> questions, <em>number</em> questions, <em>colour</em> questions and <em>location</em> questions can be generated by searching for nouns, numbers, colours and prepositions respectively.</li> + <li>The resulting dataset has ~120K questions across the above 4 semantic types.</li> + </ul> + </li> +</ul> + +<h2 id="models">Models</h2> + +<ul> + <li>VIS+LSTM - explained above</li> + <li>2-VIS+BLSTM - Add the image features twice, in the beginning and in the end (using different linear transformations), plus use a bidirectional LSTM</li> + <li>IMG+BOW - Multinomial logistic regression on image features without dimensionality reduction + bag of words (averaging word vectors).</li> + <li>FULL - Simple average of the above models.</li> +</ul> + +<h3 id="baseline">Baseline</h3> + +<ul> + <li>Includes models where the answer is guessed, or only image or question features are used, or image features along with prior knowledge of the object are used.</li> + <li>Also includes a KNN model where the system finds the nearest (image, question) pair.</li> +</ul> + +<h3 id="metrics">Metrics</h3> + +<ul> + <li>Accuracy</li> + <li>Wu-Palmer similarity measure</li> +</ul> + +<h2 id="observations">Observations</h2> + +<ul> + <li>The VIS-LSTM model outperforms the baselines while the FULL model benefits from averaging across all the models.</li> + <li>Some useful information seems to be lost when downsizing the VGG vectors.</li> + <li>Fine-tuning the word vectors helps with performance.</li> + <li>Normalising the CNN hidden image features to zero mean and unit variance leads to faster training.</li> + <li>The model does not perform well at reasoning about spatial relations between multiple objects or at counting objects when multiple objects are present.</li> +</ul> + + + + + How transferable are features in deep neural networks + + 2018-01-06T00:00:00-05:00 + /site/2018/01/06/How transferable are features in deep neural networks + <h1 id="introduction">Introduction</h1> + +<ul> + <li> + <p>When neural networks are trained on images, they tend to learn the same kind of features for the first layer (corresponding to Gabor filters or colour blobs). The first layer features are “general” irrespective of the task/optimizer etc.</p> + </li> + <li> + <p>The final layer features tend to be “specific” in the sense that they strongly depend on the task.</p> + </li> + <li> + <p>The paper studies how this transition from general to specific happens across the layers of the network. This could be useful in the domain of transfer learning, where features are reused across tasks.</p> + </li> + <li> + <p><a href="http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf">Link to the paper</a></p> + </li> +</ul> + +<h1 id="setup">Setup</h1> + +<ul> + <li> + <p>The degree of generality of a set of features, learned on task A, is defined as the extent to which these features can be used for another task B.</p> + </li> + <li> + <p>Randomly split 1000 ImageNet classes into 2 groups (corresponding to tasks A and B).
Each group has 500 classes and half the total number of examples.</p> + </li> + <li> + <p>Two 8-layer convolutional networks are trained on the two datasets and labelled as baseA and baseB respectively.</p> + </li> + <li> + <p>Now choose a layer numbered n from {1, 2…7}.</p> + </li> + <li> + <p>For each layer n, train the following two networks:</p> + + <ul> + <li><strong>Selffer Network BnB</strong> + <ul> + <li>Copy (and freeze) the first n layers from baseB. The remaining layers are initialized randomly and trained on B.</li> + <li>This serves as the control group.</li> + </ul> + </li> + <li><strong>Transfer Network AnB</strong> + <ul> + <li>Copy (and freeze) the first n layers from baseA. The remaining layers are initialized randomly and trained on B.</li> + <li>This corresponds to transferring features from A to B (a sketch of this layer-freezing setup follows the list).</li> + </ul> + </li> + </ul> + </li> + <li> + <p>If AnB performs well, the n<sup>th</sup> layer features are “general”.</p> + </li> + <li> + <p>In another setting, the transferred layers are also fine-tuned (BnB<sup>+</sup> and AnB<sup>+</sup>).</p> + </li> + <li> + <p>The ImageNet dataset contains a hierarchy of classes, which allows for creating the datasets A and B with high or low similarity.</p> + </li> +</ul>
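+<p>A minimal PyTorch sketch of the layer-freezing setup described above, assuming the base networks are plain <code>nn.Sequential</code> stacks (an illustrative simplification of the paper's 8-layer convolutional networks):</p>
+
+<pre><code class="language-python">import copy
+import torch.nn as nn
+
+def make_transfer_net(base, n):
+    """Build AnB (or BnB, if `base` was trained on B): copy and freeze
+    the first n layers of `base`, randomly re-initialise the rest."""
+    layers = []
+    for i, layer in enumerate(copy.deepcopy(base)):
+        if i &lt; n:                                  # transferred and frozen
+            for p in layer.parameters():
+                p.requires_grad = False
+        elif hasattr(layer, "reset_parameters"):   # re-initialised, trainable
+            layer.reset_parameters()
+        layers.append(layer)
+    return nn.Sequential(*layers)
+</code></pre>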
<h1 id="observation">Observation</h1> + +<h2 id="dataset-a-and-b-are-similar">Dataset A and B are similar</h2> + +<ul> + <li> + <p>For n = {1, 2}, the performance of the BnB model is the same as the baseB model. For n = {3, 4, 5, 6}, the performance of the BnB model is worse.</p> + </li> + <li> + <p>This indicates the presence of “fragile co-adaptation” between features on successive layers, where features interact with each other in a complex way and cannot be easily separated across layers. This is more prominent across the middle layers and less across the first and the last layers.</p> + </li> + <li> + <p>For model AnB, the performance matches baseB for n = {1, 2}. Beyond that, the performance begins to drop.</p> + </li> + <li> + <p>Transfer learning of features followed by fine-tuning gives better results than training the network from scratch.</p> + </li> +</ul> + +<h2 id="dataset-a-and-b-are-dissimilar">Dataset A and B are dissimilar</h2> + +<ul> + <li>The effectiveness of feature transfer decreases as the two tasks become less similar.</li> +</ul> + +<h2 id="random-weights">Random Weights</h2> + +<ul> + <li> + <p>Instead of using transferred weights in the BnB and AnB settings, the first n layers were initialized randomly.</p> + </li> + <li> + <p>The performance falls for layers 1 and 2. It further drops to near-random levels for layers 3 and beyond.</p> + </li> + <li> + <p>Another interesting insight is that even for dissimilar tasks, transferring features is better than using random features.</p> + </li> +</ul> + + + + + Distilling the Knowledge in a Neural Network + + 2017-12-31T00:00:00-05:00 + /site/2017/12/31/Distilling the Knowledge in a Neural Network + <h1 id="introduction">Introduction</h1> + +<ul> + <li> + <p>In machine learning, it is common to train a single large model (with a large number of parameters) or an ensemble of multiple smaller models using the same dataset.</p> + </li> + <li> + <p>While such large models help to improve the performance of the system, they also make it difficult and computationally expensive to deploy the system.</p> + </li> + <li> + <p>The paper proposes to transfer the knowledge from such “cumbersome” models into a single, “simpler” model which is more suitable for deployment. This transfer of knowledge is referred to as “distillation”.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1503.02531">Link to the paper</a></p> + </li> +</ul> + +<h1 id="idea">Idea</h1> + +<ul> + <li> + <p>Train the cumbersome model using the given training data in the usual way.</p> + </li> + <li> + <p>Train the simpler, distilled model using the class probabilities (from the cumbersome model) as the soft targets. Thus, the simpler model is trained to generalise the same way as the cumbersome model.</p> + </li> + <li> + <p>If the soft targets have high entropy, they provide much more information than the hard targets and the gradient varies less between training examples.</p> + </li> + <li> + <p>One approach is to minimise the L2 difference between the logits produced by the cumbersome model and the simpler model. This approach was pursued by <a href="https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf">Buciluǎ et al.</a></p> + </li> + <li> + <p>The paper proposes a more general solution which they name “distillation”. The temperature of the final softmax is raised until the cumbersome model produces a suitably soft set of targets (from the final softmax layer). These soft targets are then used to train the simpler model.</p> + </li> + <li> + <p>It also shows that the proposed approach is, in fact, a more general case of the first approach.</p> + </li> +</ul> + +<h1 id="approach">Approach</h1> + +<ul> + <li> + <p>In the simplest setting, the soft targets are produced by running the cumbersome model with a high temperature, and the same high temperature is used when training the simpler model. The temperature is set back to 1 when making predictions using the simpler model.</p> + </li> + <li> + <p>It helps to add an auxiliary objective function which corresponds to the cross-entropy loss with the correct labels; this second objective function should be given a much lower weight though. Further, since the gradients produced by the soft targets scale as 1/T<sup>2</sup>, the soft-target loss needs to be multiplied by T<sup>2</sup> to keep its contribution comparable (a sketch follows this list).</p> + </li> +</ul>
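+<p>A minimal numpy sketch of the distillation loss described above; the temperature value is an illustrative assumption:</p>
+
+<pre><code class="language-python">import numpy as np
+
+def softmax(logits, T=1.0):
+    z = logits / T
+    z = z - z.max()              # numerical stability
+    e = np.exp(z)
+    return e / e.sum()
+
+def soft_target_loss(teacher_logits, student_logits, T=4.0):
+    """Cross-entropy between the teacher's soft targets (softmax at
+    temperature T) and the student's temperature-T predictions, scaled
+    by T**2 so its gradients stay comparable to the hard-target loss."""
+    soft_targets = softmax(teacher_logits, T)
+    log_probs = np.log(softmax(student_logits, T))
+    return -(T ** 2) * (soft_targets * log_probs).sum()
+</code></pre>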
<h1 id="experiment">Experiment</h1> + +<ul> + <li> + <p>The paper reports favourable results for the distillation task in the following domains:</p> + + <ul> + <li> + <p>Image Classification (on MNIST dataset)</p> + + <ul> + <li>An extra experiment is performed where the simpler model is not shown any images of “3”, yet it fails on only 133 of the 1010 test cases involving “3”.</li> + </ul> + </li> + <li> + <p>Automatic Speech Recognition (ASR)</p> + + <ul> + <li> + <p>An extra experiment is performed where the baseline model is trained using hard targets in one run and soft targets in another, using only 3% of the total dataset.</p> + </li> + <li> + <p>The model using hard targets overfits and has poor test accuracy while the model using soft targets does not overfit and gets much better test accuracy. This shows the regularizing effect of soft targets.</p> + </li> + </ul> + </li> + <li> + <p>Training ensemble specialists for very large datasets (JFT dataset - an internal dataset at Google)</p> + + <ul> + <li> + <p>The experiment shows that while training a single large model would take a lot of time, the performance of the model can be improved by learning a small number of specialised networks (which are faster to train).</p> + </li> + <li> + <p>Though it is yet to be shown that the knowledge of such specialist models can be distilled back into a single model.</p> + </li> + </ul> + </li> + </ul> + </li> +</ul> + + + + + PTE - Predictive Text Embedding through Large-scale Heterogeneous Text Networks + + 2017-12-24T00:00:00-05:00 + /site/2017/12/24/PTE - Predictive Text Embedding through Large-scale Heterogeneous Text Networks + <h1 id="introduction">Introduction</h1> + +<ul> + <li> + <p>Unsupervised text embeddings generalize across different tasks but have weaker predictive power for any particular task (as compared to end-to-end trained deep learning methods). Deep learning techniques, however, are expensive to train and need a large amount of supervised data and a large number of parameters to tune.</p> + </li> + <li> + <p>The paper introduces Predictive Text Embedding (PTE) - a semi-supervised approach which learns an effective low-dimensional representation using a large amount of unsupervised data and a small amount of supervised data.</p> + </li> + <li> + <p>The work can be extended to general information networks, as classic techniques like MDS, Isomap, Laplacian Eigenmaps etc. do not scale well to large graphs.</p> + </li> + <li> + <p>Further, this model can be applied to heterogeneous networks, unlike the previous works <a href="https://arxiv.org/abs/1503.03578">LINE</a> and <a href="https://arxiv.org/abs/1403.6652">DeepWalk</a> which work on homogeneous networks only.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1508.00200">Link to the paper</a></p> + </li> +</ul> + +<h1 id="approach">Approach</h1> + +<ul> + <li> + <p>The paper proposes 3 different kinds of networks:</p> + + <ul> + <li><strong>Word-Word Network</strong> which captures the word co-occurrence information (local level).</li> + <li><strong>Word-Document Network</strong> which captures the word-document co-occurrence information (local + document level).</li> + <li><strong>Word-Label Network</strong> which captures the word-label co-occurrence information (bipartite graph).</li> + </ul> + </li> + <li> + <p>All 3 graphs are integrated into one heterogeneous text network.</p> + </li> + <li> + <p>First, the authors extend their previous work, LINE, to heterogeneous bipartite text networks as explained:</p> + + <ul> + <li> + <p>Given a bipartite graph <em>G = (V<sub>A</sub> ∪ V<sub>B</sub>, E)</em>, where <em>V<sub>A</sub> and V<sub>B</sub></em> are disjoint sets of vertices, the conditional probability of <em>v<sub>a</sub></em> (in set <em>V<sub>A</sub></em>) being generated by <em>v<sub>b</sub></em> (in set <em>V<sub>B</sub></em>) is given by a softmax: the exponential of the dot product between the embeddings of <em>v<sub>a</sub></em> and <em>v<sub>b</sub></em>, normalised by the sum of exponentials of dot products between <em>v<sub>b</sub></em> and all nodes in <em>V<sub>A</sub></em> (a sketch follows this section).</p> + </li> + <li> + <p>The second-order proximity between two vertices <em>v<sub>i</sub></em> and <em>v<sub>j</sub></em> can be determined by comparing their conditional distributions <em>p(.|v<sub>i</sub>)</em> and <em>p(.|v<sub>j</sub>)</em>.</p>
</li> + <li> + <p>The objective to be minimised is the KL divergence between the conditional distribution <em>p(.|v<sub>j</sub>)</em> and the empirical distribution <em>p<sup>^</sup>(.|v<sub>j</sub>)</em> (given as <em>w<sub>i, j</sub>/deg<sub>j</sub></em>).</p> + </li> + <li>The objective can be further simplified and optimised using SGD with edge sampling and negative sampling.</li> + </ul> + </li> + <li> + <p>Now, the 3 individual networks can all be interpreted as bipartite networks, so node representations for all 3 individual networks are obtained as described above.</p> + </li> + <li> + <p>For the word-label network, since the training data is sparse, one could either train the unlabelled networks first and then the labelled network, or they could all be trained jointly.</p> + </li> + <li> + <p>For the case of joint training, the edges are sampled from the 3 networks alternately.</p> + </li> + <li> + <p>For the fine-tuning case, the edges are first sampled from the unlabelled network and then from the labelled network.</p> + </li> + <li> + <p>Once the word embeddings are obtained, the text embeddings may be obtained by simply averaging the word embeddings.</p> + </li> +</ul>
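+<p>A minimal numpy sketch of the bipartite conditional probability defined above; in practice the full softmax is avoided via edge sampling and negative sampling:</p>
+
+<pre><code class="language-python">import numpy as np
+
+def conditional_prob(U_A, u_b):
+    """p(v_a | v_b) for every v_a in V_A: softmax over the dot products
+    between the embedding of v_b and all vertex embeddings in V_A.
+    U_A: (|V_A|, d) embedding matrix, u_b: (d,) embedding of v_b."""
+    scores = U_A @ u_b
+    scores -= scores.max()       # numerical stability
+    e = np.exp(scores)
+    return e / e.sum()
+</code></pre>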
<h1 id="evaluation">Evaluation</h1> + +<ul> + <li> + <p><strong>Baseline Models</strong></p> + + <ul> + <li>Local word co-occurrence based methods - SkipGram, LINE(Gww)</li> + <li>Document word co-occurrence based methods - LINE(Gwd), PV-DBOW</li> + <li>Combined method - LINE (Gww + Gwd)</li> + <li>CNN</li> + <li>PTE</li> + </ul> + </li> + <li> + <p>For long documents, PTE (joint) outperforms CNN and the other PTE variants and is around 10 times faster than the CNN model.</p> + </li> + <li> + <p>For short documents, PTE (joint) does not always outperform the CNN model, probably because word sense ambiguity is more relevant in short documents.</p> + </li> +</ul> + + + + + Revisiting Semi-Supervised Learning with Graph Embeddings + + 2017-12-11T00:00:00-05:00 + /site/2017/12/11/Revisiting Semi-Supervised Learning with Graph Embeddings + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a semi-supervised learning framework for graphs where the node embeddings are used to jointly predict both the class labels and the neighbourhood context. Usually, graph embeddings are learnt in an unsupervised manner and cannot leverage the supervision signal coming from the labelled data.</p> + </li> + <li> + <p>The framework is called <a href="https://github.com/kimiyoung/planetoid">Planetoid (Predicting Labels And Neighbors with Embeddings Transductively Or Inductively from Data)</a>.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1603.08861">Link to the paper</a></p> + </li> +</ul> + +<h2 id="problem-setting">Problem Setting</h2> + +<ul> + <li> + <p>Given a graph G = (V, E) and x<sub>L</sub> and x<sub>U</sub> as feature vectors for labelled and unlabelled nodes and y<sub>L</sub> as labels for the labelled nodes, the problem is to learn a mapping (classifier) f: x -&gt; y</p> + </li> + <li> + <p>There are two settings possible:</p> + + <ul> + <li> + <p><strong>Transductive</strong> - Predictions are made only for those nodes which are already observed in the graph at training time.</p> + </li> + <li> + <p><strong>Inductive</strong> - Predictions are made for nodes whether they have been observed in the graph at training time or not.</p> + </li> + </ul> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>The general semi-supervised learning loss would be <em>L<sub>S</sub> + λL<sub>U</sub></em> where <em>L<sub>S</sub></em> is the supervised learning loss while <em>L<sub>U</sub></em> is the unsupervised learning loss.</p> + </li> + <li> + <p>The unsupervised loss is a variant of the Skip-gram loss with negative edge sampling.</p> + </li> + <li> + <p>More specifically, first a random walk sequence S is sampled. Then either a positive edge is sampled from S (within a given context distance) or a negative edge is sampled (a sampling sketch appears at the end of this summary).</p> + </li> + <li> + <p>The label information is injected by using the label as a context: the distance between positive pairs (nodes with the same label) is minimised and the distance between negative pairs (nodes with different labels) is maximised.</p> + </li> +</ul> + +<h3 id="transductive-formulation">Transductive Formulation</h3> + +<ul> + <li> + <p>Two separate fully connected networks are applied over the node features and the node embeddings.</p> + </li> + <li> + <p>These 2 representations are then concatenated and fed to a softmax classifier to predict the class label.</p> + </li> +</ul> + +<h3 id="inductive-formulation">Inductive Formulation</h3> + +<ul> + <li> + <p>In the inductive setting, it is difficult to obtain the node embeddings at test time.
One naive approach is to retrain the network to obtain the embeddings of the previously unobserved nodes, but that is inefficient.</p> + </li> + <li> + <p>The embedding of node x is parameterized as a function of its input feature vector and is learnt by applying a fully connected neural network to the node feature vector.</p> + </li> + <li> + <p>This provides a simple way to extend the original approach to the inductive setting.</p> + </li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li> + <p>The proposed approach is evaluated in 3 settings (text classification, distantly supervised entity extraction and entity classification) and it consistently outperforms approaches that use just node features or node embeddings.</p> + </li> + <li> + <p>The key takeaway is that joint training in the semi-supervised setting has several benefits over the unsupervised setting, and that using the graph context (in terms of node embeddings) is much more effective than using a graph Laplacian-based regularization term.</p> + </li> +</ul>
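+<p>A minimal sketch of the context sampling loosely following the description above; the window size, sampling probabilities and data layout are illustrative assumptions:</p>
+
+<pre><code class="language-python">import random
+
+def sample_pair(walk, nodes, labels, window=3, p_label=0.5):
+    """Return (i, c, gamma): node i, context c and sign gamma (+1 for a
+    positive pair, -1 for a negative one). With probability p_label the
+    label acts as context (same label gives a positive pair, different
+    labels a negative one); otherwise the context comes from the walk."""
+    if labels and random.random() &lt; p_label:
+        i, j = random.sample(list(labels), 2)
+        return i, j, 1 if labels[i] == labels[j] else -1
+    t = random.randrange(len(walk) - 1)
+    i = walk[t]
+    if random.random() &lt; 0.5:                     # positive: nearby in the walk
+        return i, walk[min(t + random.randint(1, window), len(walk) - 1)], 1
+    return i, random.choice(nodes), -1            # negative: random node
+</code></pre>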
+ + + + + Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension + + 2017-11-28T00:00:00-05:00 + /site/2017/11/28/Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper proposes a two-stage synthesis network that can perform transfer learning for the task of machine comprehension.</li> + <li> + <p>The problem is the following:</p> + + <ul> + <li> + <p>We have a domain D<sub>S</sub> for which we have a labelled dataset of question-answer pairs and another domain D<sub>T</sub> for which we do not have any labelled dataset.</p> + </li> + <li> + <p>We use the data for domain D<sub>S</sub> to train SynNet and use that to generate synthetic question-answer pairs for domain D<sub>T</sub>.</p> + </li> + <li> + <p>Now we can train a machine comprehension model M on D<sub>S</sub> and fine-tune it using the synthetic data for D<sub>T</sub>.</p> + </li> + </ul> + </li> + <li><a href="https://www.microsoft.com/en-us/research/publication/two-stage-synthesis-networks-transfer-learning-machine-comprehension/">Link to the paper</a></li> +</ul> + +<h2 id="synnet">SynNet</h2> + +<ul> + <li> + <p>Works in two stages:</p> + + <ul> + <li>Answer Synthesis - Given a text paragraph, generate an answer.</li> + <li>Question Synthesis - Given a text paragraph and an answer, generate a question.</li> + </ul> + </li> +</ul> + +<h3 id="answer-synthesis-network">Answer Synthesis Network</h3> + +<ul> + <li>Given the labelled dataset for D<sub>S</sub>, generate a labelled dataset of &lt;word, tag&gt; pairs such that each word in the given paragraph is assigned one of the 4 tags: + <ul> + <li>IOB<sub>start</sub> - if it is the starting word of an answer</li> + <li>IOB<sub>mid</sub> - if it is an intermediate word of an answer</li> + <li>IOB<sub>end</sub> - if it is the ending word of an answer</li> + <li>IOB<sub>none</sub> - if it is not part of any answer</li> + </ul> + </li> + <li> + <p>For training, map the words to their GloVe embeddings and pass them through a Bi-LSTM. Next, pass them through two FC layers followed by a softmax layer.</p> + </li> + <li>For the target domain D<sub>T</sub>, all the consecutive word spans where no tag is IOB<sub>none</sub> are returned as candidate answers (a span-extraction sketch follows this section).</li> +</ul>
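+<p>A minimal sketch of the candidate-answer extraction described above; tag names follow the list above, and the function name is an illustrative assumption:</p>
+
+<pre><code class="language-python">def candidate_answers(words, tags):
+    """Return every maximal consecutive span whose tags are not
+    IOB_none; each such span is a candidate answer."""
+    spans, start = [], None
+    for i, tag in enumerate(tags + ["IOB_none"]):  # sentinel flushes the last span
+        if tag != "IOB_none" and start is None:
+            start = i
+        elif tag == "IOB_none" and start is not None:
+            spans.append(" ".join(words[start:i]))
+            start = None
+    return spans
+</code></pre>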
<h3 id="question-synthesis-network">Question Synthesis Network</h3> + +<ul> + <li> + <p>Given an input paragraph and a candidate answer, the Question Synthesis network generates the question one word at a time.</p> + </li> + <li> + <p>Map each word in the paragraph to its GloVe embedding. After the word vector, append a ‘1’ if the word is part of the candidate answer, else append a ‘0’.</p> + </li> + <li> + <p>Feed this to a Bi-LSTM network (encoder-decoder) where the decoder conditions on the representation generated by the encoder as well as the question tokens generated so far. Decoding is stopped when the “END” token is produced.</p> + </li> + <li> + <p>The paragraph may contain some named entities or rare words which do not appear in the softmax vocabulary. To account for such words, a copying mechanism is also incorporated.</p> + </li> + <li> + <p>At each time step, a Pointer Network (C<sub>P</sub>) and a Vocabulary Predictor (V<sub>P</sub>) are used to generate a probability distribution for the next word and a Latent Predictor Network is used to decide which of the two networks would be used for the prediction.</p> + </li> + <li> + <p>At inference time, greedy decoding is used: the most likely predictor is chosen and then the most likely word from that predictor is chosen.</p> + </li> +</ul> + +<h3 id="machine-comprehension-model">Machine Comprehension Model</h3> + +<ul> + <li>Given any MC model, first train it over domain D<sub>S</sub> and then fine-tune it using the artificial questions generated from D<sub>T</sub>.</li> +</ul> + +<h3 id="implementation-details">Implementation Details</h3> + +<ul> + <li> + <p><strong>Data Regularization</strong> - There is a need to alternate between mini-batches from the source and target domains while fine-tuning the MC model.</p> + </li> + <li> + <p>At inference time, the fine-tuned MC model is used to get the distributions P(i=start) and P(i=end) (corresponding to the likelihood of choosing word <em>i</em> as the starting or ending word of the answer) for all the words, and dynamic programming (DP) is used to find the optimal answer span.</p> + </li> + <li> + <p><strong>Checkpoint Averaging</strong> - Use the different checkpointed models to average the answer likelihood before running DP.</p> + </li> + <li> + <p>Using the synthetically generated dataset helps to gain a 2% improvement in F-score (from SQuAD -&gt; NewsQA). Using checkpointed models further improves the performance to an overall 46.6% F-score, which narrows the gap with respect to the performance of a model trained on NewsQA itself (~52.3% F-score).</p> + </li> +</ul> + + + + + + Higher-order organization of complex networks + + 2017-11-19T00:00:00-05:00 + /site/2017/11/19/Higher-order organization of complex networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a generalized framework for graph clustering (clusters of network motifs) on the basis of higher-order connectivity patterns.</p> + </li> + <li> + <p><a href="http://science.sciencemag.org/content/353/6295/163">Link to the paper</a></p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>Given a <a href="https://shagunsodhani.in/papers-I-read/Network-Motifs-Simple-Building-Blocks-of-Complex-Networks">motif M</a>, the framework aims to find a set of nodes S such that the nodes of S participate in many instances of M while avoiding cutting instances of M (where only a subset of an instance’s nodes falls inside S).</p> + </li> + <li> + <p>Mathematically, the aim is to minimise the motif conductance metric, given as <em>cut<sub>M</sub>(S, S’) / min[vol<sub>M</sub>(S), vol<sub>M</sub>(S’)]</em>, where <em>S’</em> is the complement of <em>S</em>, <em>cut<sub>M</sub>(S, S’)</em> = the number of instances of M which have at least one node in both <em>S</em> and <em>S’</em>, and <em>vol<sub>M</sub>(S)</em> = the number of nodes in instances of M that belong to S (a sketch appears at the end of this summary).</p> + </li> + <li> + <p>Solving the above optimisation problem exactly is computationally infeasible, so an approximate spectral solution is used.</p> + </li> + <li> + <p>The approximate solution is easy to implement, efficient and guaranteed to find clusters that are at most a quadratic factor away from the optimal.</p> + </li> +</ul> + +<h2 id="algorithm">Algorithm</h2> + +<ul> + <li> + <p>Given the network and a motif M, form a motif adjacency matrix W<sub>M</sub> where W<sub>M</sub>(i, j) is the number of instances of M that contain both i and j.</p> + </li> + <li> + <p>Compute the spectral ordering of the nodes from the normalized motif Laplacian matrix.</p> + </li> + <li> + <p>Take the prefix set of the spectral ordering with the smallest motif conductance.</p> + </li> +</ul> + +<h2 id="scalability">Scalability</h2> + +<ul> + <li>Worst case <em>O(m<sup>1.5</sup>)</em>; based on experiments, <em>O(m<sup>1.2</sup>)</em>, where <em>m</em> is the number of edges.</li> +</ul> + +<h2 id="advantages">Advantages</h2> + +<ul> + <li> + <p>Applicable to directed, undirected and weighted graphs (allows for negative edge weights as well).</p> + </li> + <li> + <p>In case the motif is not known beforehand, the framework can be used to compute significant motifs.</p> + </li> + <li> + <p>The proposed framework unifies the two fundamental tools of network science (motif analysis and network partitioning) along with some worst-case guarantees for the approximations employed, and can be extended to identify higher-order modular organization of networks.</p> + </li> +</ul>
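+<p>A minimal sketch of the motif conductance metric from the Approach section above, assuming motif instances are given as node tuples and that volumes count instance endpoints (one reading of the definition above):</p>
+
+<pre><code class="language-python">def motif_conductance(instances, S, nodes):
+    """instances: iterable of node tuples, one per instance of motif M.
+    cut counts instances split between S and its complement; vol counts
+    occurrences of instance nodes inside each side."""
+    S = set(S)
+    S_bar = set(nodes) - S
+    cut = sum(1 for inst in instances
+              if any(v in S for v in inst) and any(v in S_bar for v in inst))
+    vol_S = sum(1 for inst in instances for v in inst if v in S)
+    vol_S_bar = sum(1 for inst in instances for v in inst if v in S_bar)
+    return cut / min(vol_S, vol_S_bar)
+</code></pre>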
id="idea">Idea</h2> + +<ul> + <li> + <p>A network motif is defined as “a pattern of inter-connections occurring in complex networks in numbers that are significantly higher than those in randomized networks”.</p> + </li> + <li> + <p>In the practical setting, given an input network, we first create randomized networks which have same single node characteristics (like a number of incoming and outgoing edges) as the input network.</p> + </li> + <li> + <p>The patterns that occur at a much higher frequency in the input graph (than the randomized graphs) are reported as motifs.</p> + </li> + <li> + <p>More specifically, the patterns for which the probability of appearing in a randomized network an equal or more number of times than in the real network is lower than a cutoff value (say 0.01).</p> + </li> +</ul> + +<h2 id="motivation">Motivation</h2> + +<ul> + <li> + <p>Real-life networks exhibit properties like “small world” property ( the majority of nodes are within a distance of fewer than 7 hops from each other) and “scale-free” property (fraction of nodes having k edges decays as a power-law).</p> + </li> + <li> + <p>Motifs are one such structural property that is exhibited by networks in biochemistry, neurobiology, ecology, and engineering. Further, motifs shared by graphs of different domains are different which hints at the usefulness of motifs as a fundamental structural property of the graph and relates to the process of evolution of the graph.</p> + </li> +</ul> + + + + + Word Representations via Gaussian Embedding + + 2017-11-05T00:00:00-04:00 + /site/2017/11/05/Word Representations via Gaussian Embedding + <h2 id="introduction">Introduction</h2> + +<ul> + <li>Existing word embedding models like <a href="https://gist.github.com/shagunsodhani/176a283e2c158a75a0a6">Skip-Gram</a>, <a href="https://gist.github.com/shagunsodhani/efea5a42d17e0fcf18374df8e3e4b3e8">GloVe</a> etc map words to fixed sized vectors in a low dimensional vector space.</li> + <li>This fixed point setting cannot capture uncertainty about representation.</li> + <li>Further, these fixed point vectors are compared with measures like dot product and cosine similarity which are not suitable for capturing asymmetric properties like textual entailment and inclusion.</li> + <li>The paper proposes to learn Gaussian function embeddings (with diagonal covariance) for the word vectors.</li> + <li>This way, the words are mapped to soft regions in the embedding space which enables modeling uncertainty and asymmetric properties like inclusion and uncertainty.</li> + <li><a href="https://arxiv.org/abs/1412.6623">Link to the paper</a></li> + <li><a href="https://github.com/seomoz/word2gauss">Implementation</a></li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li>KL divergence is used as the asymmetric distance function for comparing the distributions.</li> + <li>Unlike the Word2Vec model, the proposed model uses ranking-based loss.</li> +</ul> + +<h3 id="similarity-measures-used">Similarity Measures used</h3> + +<ul> + <li> + <p><strong>Symmetric Similarity</strong></p> + </li> + <li>For two gaussian distributions, <em>P<sub>i</sub></em> and <em>P<sub>j</sub></em>, compute the inner product <em>E(P<sub>i</sub>, P<sub>j</sub>)</em> as <em>N(0; mean<sub>i</sub> - mean<sub>j</sub>, sigma<sub>i</sub> + sigma<sub>j</sub>)</em>.</li> + <li>Compute the gradient of <em>mean</em> and <em>sigma</em> with respect to <em>log(E)</em>.</li> + <li> + <p>The resulting loss function can be interpreted as pushing the means closer which 
encouraging the two Gaussians to be more concentrated.</p> + </li> + <li> + <p><strong>Asymmetric Similarity</strong></p> + </li> + <li>Use KL divergence to encode the context distribution.</li> + <li>The benefit over the symmetric setting is that entailment-type relations can now also be modeled.</li> + <li>For example, a low KL divergence from x to y indicates that y can be encoded as x or that y “entails” x.</li> +</ul>
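+<p>A minimal numpy sketch of the asymmetric similarity above: the closed-form KL divergence between two Gaussians with diagonal covariances (passed here as vectors of variances):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def kl_diag_gaussians(mu0, var0, mu1, var1):
+    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ). Note the
+    asymmetry: the divergence from one Gaussian to the other generally
+    differs from the reverse, which is what lets the model encode
+    entailment-like relations."""
+    return 0.5 * np.sum(var0 / var1 + (mu1 - mu0) ** 2 / var1
+                        - 1.0 + np.log(var1 / var0))
+</code></pre>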
<h2 id="learning">Learning</h2> + +<ul> + <li>One of the two notions of similarity is chosen and a max-margin loss is used.</li> + <li>The mean is regularized by adding a simple constraint on its L2-norm.</li> + <li>For the covariance matrix, the eigenvalues are constrained to lie within a hypercube. This ensures that the positive-definite property of the covariance matrix is maintained while constraining its size.</li> +</ul> + +<h2 id="observations">Observations</h2> + +<ul> + <li>Polysemous words have higher variance in their word embeddings as compared to specific words.</li> + <li>KL divergence (with diagonal covariance) outperforms other models.</li> + <li>Simple tree hierarchies can also be modeled by embedding into the Gaussian space. A Gaussian is created for each node with a randomly initialized mean and the same set of embeddings is used for nodes and context.</li> + <li>For word similarity benchmarks, embeddings with spherical covariance have a slight edge over embeddings with diagonal covariance and outperform the Skip-Gram model in all the cases.</li> +</ul> + +<h2 id="future-work">Future Work</h2> + +<ul> + <li>Use combinations of low-rank and diagonal matrices for covariances.</li> + <li>Improved optimisation strategies.</li> + <li>Trying other distributions like Student’s t-distribution.</li> +</ul> + + + + + HARP - Hierarchical Representation Learning for Networks + + 2017-10-28T00:00:00-04:00 + /site/2017/10/28/HARP - Hierarchical Representation Learning for Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>HARP is an architecture to learn low-dimensional node embeddings by compressing the input graph into smaller graphs.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1706.07845">Link to the paper</a>.</p> + </li> + <li> + <p>Given a graph <em>G = (V, E)</em>, compute a series of successively smaller (coarser) graphs <em>G<sub>0</sub>, …, G<sub>L</sub></em>. Learn the node representations in <em>G<sub>L</sub></em> and successively refine the embeddings for the larger graphs in the series.</p> + </li> + <li> + <p>The architecture is independent of the algorithms used to embed the nodes or to refine the node representations.</p> + </li> + <li> + <p><strong>Graph coarsening technique that preserves global structure</strong></p> + + <ul> + <li> + <p>Collapse edges and stars to preserve first- and second-order proximity.</p> + </li> + <li> + <p><strong>Edge collapsing</strong> - select a subset of <em>E</em> such that no two edges are incident on the same vertex, then merge the endpoints of each selected edge into a single node and merge their edges as well (a sketch appears at the end of this summary).</p> + </li> + <li> + <p><strong>Star collapsing</strong> - given a star structure, collapse the pairs of neighboring nodes (of the central node).</p> + </li> + <li> + <p>In practice, first apply star collapsing, followed by edge collapsing.</p> + </li> + </ul> + </li> + <li> + <p><strong>Extending node representation from coarse graph to finer graph</strong></p> + + <ul> + <li> + <p>Let’s say <em>node1</em> and <em>node2</em> were merged into <em>node12</em> during coarsening. First, copy the representation of <em>node12</em> into <em>node1</em> and <em>node2</em>.</p> + </li> + <li> + <p>Additionally, if hierarchical softmax was used, extend the binary tree such that <em>node12</em> is replaced by the 2 child nodes <em>node1</em> and <em>node2</em>.</p> + </li> + <li> + <p>The time complexity for HARP + DeepWalk is <em>O(number of walks * |V|)</em> while for HARP + LINE it is <em>O(number of iterations * |E|)</em>.</p> + </li> + <li> + <p>The asymptotic complexity remains the same as for the HARP-less versions in both cases.</p> + </li> + </ul> + </li> + <li> + <p>Multilabel classification experiments show that HARP improves all the node embedding techniques, with gains of up to 14%.</p> + </li> +</ul>
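+<p>A minimal sketch of one round of the edge collapsing step described above; the greedy matching order (and hence which edges get collapsed) is an illustrative assumption:</p>
+
+<pre><code class="language-python">def edge_collapse(edges):
+    """Greedily pick edges so that no two share a vertex (a matching),
+    then merge the endpoints of each picked edge into one node and
+    rewrite the remaining edges accordingly."""
+    matched, merge = set(), {}
+    for u, v in edges:
+        if u not in matched and v not in matched and u != v:
+            matched.update((u, v))
+            merge[v] = u                        # v folds into u
+    coarse_edges = set()
+    for u, v in edges:
+        u, v = merge.get(u, u), merge.get(v, v)
+        if u != v:
+            coarse_edges.add((min(u, v), max(u, v)))
+    return coarse_edges, merge
+</code></pre>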
+ + + + Swish - a Self-Gated Activation Function + + 2017-10-22T00:00:00-04:00 + /site/2017/10/22/Swish-A self gated activation function + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a new activation function called Swish, with formulation <em>f(x) = x.sigmoid(x)</em>, and its parameterised version, Swish-β, where <em>f(x, β) = x.sigmoid(β.x)</em> and β is a trainable parameter (a sketch follows the properties below).</p> + </li> + <li> + <p>The paper shows that Swish is consistently able to outperform ReLU and other activation functions over a variety of datasets (CIFAR, ImageNet, WMT2014), though in some cases only by small margins.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1710.05941">Link to the paper</a></p> + </li> +</ul> + +<h2 id="properties-of-swish">Properties of Swish</h2> + +<ul> + <li> + <p><img src="https://raw.githubusercontent.com/shagunsodhani/papers-I-read/master/assets/Swish/plot.png" alt="Plot Of Swish" /></p> + </li> + <li> + <p>Smooth, non-monotonic function.</p> + </li> + <li> + <p>Swish-β can be thought of as a smooth function that interpolates between a linear function and ReLU.</p> + </li> + <li> + <p>Uses a self-gating mechanism (that is, it uses its own value to gate itself). Gating generally uses multiple scalar inputs, but since self-gating uses a single scalar input, it can be used to replace activation functions, which are generally pointwise.</p> + </li> + <li> + <p>Being unbounded on the x &gt; 0 side, it avoids saturation, where training slows down due to near-zero gradients.</p> + </li> + <li> + <p>Being bounded below induces a kind of regularization effect, as large negative inputs are forgotten.</p> + </li> + <li> + <p>Since the Swish function is smooth, the output landscape and the loss landscape are also smooth. A smooth landscape should be more traversable and less sensitive to initialization and learning rates.</p> + </li> +</ul>
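+<p>A minimal numpy sketch of the activation, using the identity x * sigmoid(b * x) = x / (1 + exp(-b * x)):</p>
+
+<pre><code class="language-python">import numpy as np
+
+def swish(x, beta=1.0):
+    """Swish-beta activation: x * sigmoid(beta * x). beta=1 gives plain
+    Swish; large beta approaches ReLU, beta=0 gives the linear x/2."""
+    return x / (1.0 + np.exp(-beta * x))
+</code></pre>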
<h2 id="criticism">Criticism</h2> + +<ul> + <li>Swish is much more complicated than ReLU (when weighed against the small improvements it provides), so it might not end up with as strong an adoption as ReLU.</li> +</ul> + + + + + Reading Wikipedia to Answer Open-Domain Questions + + 2017-10-15T00:00:00-04:00 + /site/2017/10/15/Reading Wikipedia to Answer Open-Domain Questions + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper presents a new machine comprehension dataset for question answering in a real-life setting (say, when interacting with Cortana/Siri).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1704.00051">Link to the paper</a></p> + </li> +</ul> + +<h2 id="unique-aspects-of-the-dataset">Unique Aspects of the dataset</h2> + +<ul> + <li> + <p>Existing machine comprehension (MC) datasets are either too small or synthetic (with a distribution different from that of real questions posted by humans). MARCO questions are sampled from real, anonymized user queries.</p> + </li> + <li> + <p>Most datasets provide a comparatively small and clean context to answer the question. In MARCO, the context documents (which may or may not contain the answer) are extracted using Bing from real-world documents. As such, the questions and the context documents are noisy.</p> + </li> + <li> + <p>In general, the answers to the questions are restricted to an entity or a text span within the document. In the case of MARCO, the human judges are encouraged to generate complete sentences as answers.</p> + </li> +</ul> + +<h2 id="dataset-description">Dataset Description</h2> + +<ul> + <li> + <p>The first release consists of 100K questions, with the aim of releasing 1M questions in future releases.</p> + </li> + <li> + <p>All questions are tagged with segment information.</p> + </li> + <li> + <p>A subset of questions has multiple answers and another subset has no answers at all.</p> + </li> + <li> + <p>Each record in the dataset contains the following information:</p> + + <ul> + <li><strong>Query</strong> - The actual question</li> + <li><strong>Passage</strong> - Top 10 contextual passages extracted from a web search engine (which may or may not contain the answer to the question).</li> + <li><strong>Document URLs</strong> - URLs for the top documents (which are the source of the contextual passages).</li> + <li><strong>Answer</strong> - Answer synthesised by human evaluators.</li> + <li><strong>Segment</strong> - Query type: description, numeric, entity, location, person.</li> + </ul> + </li> +</ul> + +<h2 id="experimental-results">Experimental Results</h2> + +<ul> + <li> + <p>Metrics</p> + + <ul> + <li>Accuracy and precision/recall for numeric questions</li> + <li>ROUGE-L/paraphrasing-aware evaluation framework for long, textual answers.</li> + </ul> + </li> + <li> + <p>Among generative models, Memory Networks performed better than seq-to-seq.</p> + </li> + <li> + <p>In the cloze-style test, <a href="https://arxiv.org/abs/1609.05284">ReasoNet</a> achieved an accuracy of approx. 59% while <a href="ASR">Attention Sum Reader</a> achieved an accuracy of approx. 55%.</p> + </li> + <li> + <p>Current QA systems (including the ones using memory and attention) derive their power from supervised data and are very different from how humans do reasoning.</p> + </li> + <li> + <p>The ImageNet dataset pushed the state-of-the-art performance on object classification beyond human accuracy. Similar was the case with the speech recognition dataset from DARPA, which led to the advancement of speech recognition. Having a large, diverse dataset of human-like questions is a fundamental requirement for advancing the field, and the paper aims to provide just the right kind of dataset.</p> + </li> +</ul> + + + + + Task-Oriented Query Reformulation with Reinforcement Learning + + 2017-10-01T00:00:00-04:00 + /site/2017/10/01/Task-Oriented Query Reformulation with Reinforcement Learning + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper introduces a query reformulation system that rewrites a query to maximise the number of “relevant” documents that are extracted from a given black-box search engine.</li> + <li>A Reinforcement Learning (RL) agent selects the terms that are to be added to the reformulated query and the rewards are decided on the basis of document recall.</li> + <li><a href="https://arxiv.org/abs/1704.04572">Link to the paper</a></li> + <li><a href="https://github.com/nyu-dl/QueryReformulator">Implementation</a></li> +</ul> + +<h2 id="key-aspect">Key Aspect</h2> + +<ul> + <li>The underlying problem is as follows: when the end user makes a query to a search engine, the engine often relies on word-matching techniques to perform retrieval. This means relevant documents could be missed if there are no exactly matching words between the query and the document.</li> + <li>This problem can be handled at two levels: First, the search engine itself takes care of query semantics.
Alternatively, we assume the search engine to be dumb and instead have a system in place that can improve the original queries (automatic query reformulation).</li> + <li>The paper takes the latter approach and expands the original query by adding terms from the set of retrieved documents (pseudo-relevance feedback).</li> +</ul> + +<h2 id="datasets">Datasets</h2> + +<ul> + <li>TREC - Complex Answer Retrieval (TREC-CAR)</li> + <li>Jeopardy Q&amp;A dataset</li> + <li>Microsoft Academic (MSA) dataset - created by the authors using papers crawled from the Microsoft Academic API</li> +</ul> + +<h2 id="framework">Framework</h2> + +<ul> + <li>The query reformulation task is modeled as an RL problem where: + <ul> + <li>The environment is the search engine.</li> + <li>The actions are whether a word is to be added to the query or not and, if yes, which word is added.</li> + <li>The reward is the retrieval accuracy.</li> + </ul> + </li> + <li>The input to the system is a query q<sub>0</sub> consisting of a sequence of words w<sub>1</sub>, …, w<sub>n</sub> and a candidate term t<sub>i</sub> with some context words.</li> + <li>Candidate terms are all the terms that appear in the original query and the documents retrieved using the query.</li> + <li>The words are mapped to vectors and then a fixed-size representation is obtained for the sequence using CNNs or RNNs.</li> + <li>Similarly, a representation is obtained for the candidate words by feeding them and their context words to the CNN or RNN.</li> + <li>Finally, a sigmoidal score is computed for all the candidate words.</li> + <li>An RNN sequentially applies this model to emit query words till an end token is emitted.</li> + <li>Only the vocabulary of the retrieved documents is used (and not the entire vocabulary set), to keep inference fast.</li> +</ul> + +<h2 id="training">Training</h2> + +<ul> + <li>The model is trained using the REINFORCE algorithm, which minimizes <em>C<sub>a</sub> = −(R − R~) * sum(log(P(t|q)))</em>, where <em>R~</em> is the baseline (a sketch follows this list).</li> + <li>The value network minimises <em>C<sub>b</sub> = α||R − R~||<sup>2</sup></em>.</li> + <li><em>C<sub>a</sub></em> and <em>C<sub>b</sub></em> are minimised using SGD.</li> + <li>An entropy regularisation term is added to prevent the probability distribution from peaking.</li> +</ul>
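+<p>A minimal numpy sketch of the two objectives above; <code>alpha</code> and the function signature are illustrative assumptions:</p>
+
+<pre><code class="language-python">import numpy as np
+
+def reformulation_losses(log_probs, R, R_baseline, alpha=0.1):
+    """log_probs: log P(t|q) for the terms added to one query;
+    R: recall-based reward; R_baseline: the value network's estimate.
+    C_a is the REINFORCE policy loss (with baseline) and C_b trains the
+    value network towards the observed reward."""
+    C_a = -(R - R_baseline) * np.sum(log_probs)
+    C_b = alpha * (R - R_baseline) ** 2
+    return C_a, C_b
+</code></pre>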
<h2 id="experiments">Experiments</h2> + +<h3 id="baseline-methods">Baseline Methods</h3> + +<ul> + <li> + <p><strong>Raw</strong> - The original query is fed to the search engine without any modification.</p> + </li> + <li> + <p><strong>Pseudo-Relevance Feedback (PRF-TFIDF)</strong> - The query is expanded using the top-N TF-IDF terms.</p> + </li> + <li> + <p><strong>PRF-Relevance Model (PRF-RM)</strong> - The probability of adding token <em>t</em> to the query <em>q0</em> is given by <em>P(t|q0) = (1 − λ)P′(t|q0) + λ sum (P(d)P(t|d)P(q0|d))</em></p> + </li> +</ul> + +<h3 id="proposed-methods">Proposed Methods</h3> + +<ul> + <li><strong>Supervised Learning</strong> + <ul> + <li>Assumes that the query words contribute independently to the query retrieval performance (too strong an assumption).</li> + <li>A term is marked as relevant if <em>(R(new_query) - R(old_query))/R(old_query) &gt; 0.005</em></li> + </ul> + </li> + <li><strong>Reinforcement Learning</strong> + <ul> + <li>RL-RNN/CNN - RL framework + RNN/CNN to encode the input features.</li> + <li>RL-RNN-SEQ - Adds a sequential generator.</li> + </ul> + </li> + <li><strong>Metrics</strong> + <ul> + <li>Recall@K</li> + <li>Precision@K</li> + <li>Mean Average Precision@K</li> + </ul> + </li> + <li> + <p><strong>Reward</strong> - The paper uses Recall@K as the reward when training the RL-based models, with the argument that the “metric has shown to be effective in improving the other metrics as well”, though without any justification.</p> + </li> + <li> + <p><strong>SL-Oracle</strong> - A classifier that perfectly selects terms that will increase performance, based on the supervised learning approach.</p> + </li> + <li><strong>RL-Oracle</strong> - Produces a conservative upper bound for the performance of the RL agent. It splits the test data into N subsets and trains an RL agent for each subset. Then, the reward is averaged over all the N subsets.</li> +</ul> + +<h2 id="observations">Observations</h2> + +<ul> + <li>Reformulation-based methods &gt; original query</li> + <li>RL methods &gt; supervised methods &gt; unsupervised methods</li> + <li>RL-RNN-SEQ performs slightly worse than RL-RNN but is much faster (as it produces shorter queries).</li> + <li>The RL-based model benefits from more candidate terms while the classical PRF method quickly saturates.</li> +</ul> + +<h2 id="comments">Comments</h2> + +<ul> + <li>Interestingly, for each raw query, they carried out the reformulation step just once and not multiple times. The number of times a query is reformulated could also have become a part of the RL framework.</li> +</ul> + + + + + Refining Source Representations with Relation Networks for Neural Machine Translation + + 2017-09-22T00:00:00-04:00 + /site/2017/09/22/Refining Source Representations with Relation Networks for Neural Machine Translation + <h2 id="introduction">Introduction</h2> + +<ul> + <li>The paper introduces a Relation Network (RN) that refines the encoded representation of the given source document (or sentence).</li> + <li>This refined source representation can then be used in Neural Machine Translation (NMT) systems to counter the problem of RNNs forgetting old information.</li> + <li><a href="https://arxiv.org/abs/1709.03980">Link to the paper</a></li> +</ul> + +<h2 id="limitations-of-existing-nmt-models">Limitations of existing NMT models</h2> + +<ul> + <li>The RNN encoder-decoder architecture is the standard choice for NMT systems. But RNNs are prone to forgetting old information.</li> + <li>In NMT models, attention is modeled at the level of words, while the use of phrases (instead of words) would be a better choice.</li> + <li>While NMT systems might be able to capture certain relationships between words, they are not explicitly designed to capture such information.</li> +</ul> + +<h2 id="contributions-of-the-paper">Contributions of the paper</h2> + +<ul> + <li>Learn the relationship between the source words using the context (neighboring words).</li> + <li>Relation Networks (RNs) build pairwise relations between source words using the representations generated by the RNNs.
The RN sits between the encoder and the attention layer of the encoder-decoder framework, thereby keeping the main architecture unaffected.</li> +</ul> + +<h2 id="relation-network">Relation Network</h2> + +<ul> + <li>A neural network designed for relational reasoning.</li> + <li>Given a set of inputs <em>O = o<sub>1</sub>, …, o<sub>n</sub></em>, the RN is formed as a composition over the inputs: + RN(O) = f(sum(g(o<sub>i</sub>, o<sub>j</sub>))), where f and g are functions used to learn the relations (feed-forward networks); a sketch follows this section.</li> + <li><em>g</em> learns how the objects are related, hence the name “relation”.</li> + <li><strong>Components</strong>: + <ul> + <li>CNN Layer + <ul> + <li>Extracts information from the words surrounding the given word (context).</li> + <li>The final output of this layer is the sequence of vectors for the different kernel widths.</li> + </ul> + </li> + <li>Graph Propagation (GP) Layer + <ul> + <li>Connects all the words with each other in the form of a graph.</li> + <li>Each output vector from the CNN corresponds to a node in the graph and there is an edge between every possible pair of nodes.</li> + <li>The information flows between the nodes of the graph in a message-passing fashion (graph propagation) to obtain a new set of vectors for each node.</li> + </ul> + </li> + <li>Multi-Layer Perceptron (MLP) Layer + <ul> + <li>The representation from the GP Layer is fed to the MLP layer.</li> + <li>The layer uses residual connections from previous layers in the form of concatenation.</li> + </ul> + </li> + </ul> + </li> +</ul>
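+<p>A minimal sketch of the RN composition above, assuming <code>g</code> and <code>f</code> are arbitrary callables (feed-forward networks in the paper) and that the sum runs over unordered pairs:</p>
+
+<pre><code class="language-python">from itertools import combinations
+
+def relation_network(objects, g, f):
+    """RN(O) = f( sum over pairs (o_i, o_j) of g(o_i, o_j) ): g scores
+    how a pair of objects is related; f aggregates all the relations."""
+    pair_sum = sum(g(o_i, o_j) for o_i, o_j in combinations(objects, 2))
+    return f(pair_sum)
+</code></pre>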
<h2 id="datasets">Datasets</h2> + +<ul> + <li>IWSLT Data - 44K sentences from the tourism and travel domain.</li> + <li>NIST Data - 1M Chinese-English parallel sentence pairs.</li> +</ul> + +<h2 id="models">Models</h2> + +<ul> + <li>MOSES - Open source translation system - http://www.statmt.org/moses/</li> + <li>NMT - Attention-based NMT</li> + <li>NMT+ - NMT with improved decoder</li> + <li>TRANSFORMER - Google’s new NMT</li> + <li>RNMT+ - Relation Network integrated with NMT+</li> +</ul> + +<h2 id="evaluation-metric">Evaluation Metric</h2> + +<ul> + <li>case-insensitive 4-gram BLEU score</li> +</ul> + +<h2 id="observations">Observations</h2> + +<ul> + <li>As sentences become longer (more than 50 words), RNMT+ clearly outperforms the other baselines.</li> + <li>Qualitative evaluation shows that the RNMT+ model captures word alignment better than the NMT+ model.</li> + <li>Similarly, the NMT+ system tends to miss some information from the source sentence (more so for longer sentences). While both CNNs and RNNs are weak at capturing long-term dependencies, using the relation layer mitigates this issue to some extent.</li> +</ul> + + + + + Pointer Networks + + 2017-08-27T00:00:00-04:00 + /site/2017/08/27/Pointer Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>The paper introduces a novel architecture that generates an output sequence such that the elements of the output sequence are discrete tokens corresponding to positions in the input sequence.</p> + </li> + <li> + <p>Such a problem cannot be solved using <a href="https://gist.github.com/shagunsodhani/a2915921d7d0ac5cfd0e379025acfb9f">Seq2Seq</a> or Neural Turing Machines as the size of the output softmax is variable (it depends on the size of the input sequence).</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1506.03134">Link to the paper</a></p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>Traditional attention-based sequence-to-sequence models compute an attention vector for each step of the output decoder and use it to blend the individual context vectors of the input into a single, consolidated attention vector, which is used to compute a fixed-size softmax.</p> + </li> + <li> + <p>In Pointer Nets, the attention vector (over all the tokens in the input sequence) is instead normalized and treated directly as the softmax output over the input tokens (a sketch follows this section).</p> + </li> + <li> + <p>So a Pointer Net is a very simple modification of the attention model.</p> + </li> +</ul>
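+<p>A minimal numpy sketch of the modification described above, using standard additive attention scores; the parameter names (<code>v</code>, <code>W1</code>, <code>W2</code>) are illustrative:</p>
+
+<pre><code class="language-python">import numpy as np
+
+def pointer_distribution(decoder_state, encoder_states, v, W1, W2):
+    """Additive attention scores u_j = v . tanh(W1 e_j + W2 d); instead
+    of blending encoder states with these weights, a Pointer Net
+    softmaxes the scores directly as the output distribution over
+    input positions."""
+    scores = np.array([v @ np.tanh(W1 @ e_j + W2 @ decoder_state)
+                       for e_j in encoder_states])
+    scores -= scores.max()       # numerical stability
+    e = np.exp(scores)
+    return e / e.sum()           # P(output token = input position j)
+</code></pre>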
+ + + + + Learning to Compute Word Embeddings On the Fly + + 2017-08-21T00:00:00-04:00 + /site/2017/08/21/Learning to Compute Word Embeddings On the Fly + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Word-based language models suffer from the problem of rare or Out of Vocabulary (OOV) words.</p> + </li> + <li> + <p>Learning representations for OOV words directly on the end task often results in poor representations.</p> + </li> + <li> + <p>The alternative is to replace all the rare words with a single, unique representation (which loses information) or to use character-level models to obtain word representations (which tend to miss the semantic relationships).</p> + </li> + <li> + <p>The paper proposes to learn a network that can predict the representations of words using auxiliary data (referred to as definitions) such as dictionary definitions, Wikipedia infoboxes, the spelling of the word, etc.</p> + </li> + <li> + <p>The auxiliary data encoders are trained jointly with the end task to ensure that the word representations align with the requirements of the end task.</p> + </li> +</ul> + +<h2 id="approach">Approach</h2> + +<ul> + <li> + <p>Given a rare word <em>w</em>, let <em>d(w) = &lt;x<sub>1</sub>, x<sub>2</sub>…&gt;</em> denote its definition, where <em>x<sub>i</sub></em> are words.</p> + </li> + <li> + <p><em>d(w)</em> is fed to a <em>definition reader</em> network <em>f</em> (an LSTM) and its last state is used as the <em>definition embedding e<sub>d</sub>(w)</em> (see the sketch at the end of this summary).</p> + </li> + <li> + <p>In case <em>w</em> has multiple definitions, the embeddings are combined using mean pooling.</p> + </li> + <li> + <p>The approach can be extended to in-vocabulary words as well, by using the <em>definition embedding</em> of such words to update their original embeddings.</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li>Auxiliary data sources + <ul> + <li>Word definitions from WordNet</li> + <li>Spelling of words</li> + </ul> + </li> + <li> + <p>The proposed approach was tested on the following tasks:</p> + + <ul> + <li>Extractive Question Answering over SQuAD + <ul> + <li>Base model from <a href="https://arxiv.org/abs/1611.01604">Xiong et al. 2016</a></li> + </ul> + </li> + <li>Entailment Prediction over the SNLI corpus + <ul> + <li>Base models from <a href="https://nlp.stanford.edu/pubs/snli_paper.pdf">Bowman et al. 2015</a> and <a href="https://arxiv.org/abs/1609.06038">Chen et al. 2016</a></li> + </ul> + </li> + <li>One Billion Words Language Modelling</li> + </ul> + </li> + <li> + <p>For all the tasks, models using both spelling and dictionary (SD) outperformed the models using just one.</p> + </li> + <li>While SD does not outperform the GloVe model (with full vocabulary), it does bridge the performance gap significantly.</li> +</ul> + +<h2 id="future-work">Future Work</h2> + +<ul> + <li> + <p>Multi-token words like “San Francisco” are not accounted for right now.</p> + </li> + <li> + <p>The model does not handle the rare words which appear in the definitions and just replaces them with the &lt;UNK&gt; token. Making the model recursive would be a useful addition.</p> + </li> +</ul>
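+<p>A minimal sketch of the definition reader, assuming PyTorch and toy sizes: an LSTM runs over a definition’s word embeddings, its last state is the definition embedding, and multiple definitions of the same word are mean-pooled:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class DefinitionReader(nn.Module):
+    """Sketch of the definition reader f: the LSTM's last state serves
+    as e_d(w); multiple definitions are combined by mean pooling."""
+    def __init__(self, vocab_size, emb_dim):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, emb_dim)
+        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)
+
+    def forward(self, definitions):  # (num_defs, def_len) token ids
+        _, (h_n, _) = self.lstm(self.embed(definitions))
+        e_d = h_n[-1]            # (num_defs, emb_dim), one per definition
+        return e_d.mean(dim=0)   # mean-pool over the definitions
+
+reader = DefinitionReader(vocab_size=1000, emb_dim=64)
+emb = reader(torch.randint(0, 1000, (2, 5)))  # 2 definitions, 5 words each
+</code></pre>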
+ + + + + R-NET - Machine Reading Comprehension with Self-matching Networks + + 2017-08-07T00:00:00-04:00 + /site/2017/08/07/R-NET - Machine Reading Comprehension with Self-matching Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>R-NET is an end-to-end trained neural network model for machine comprehension.</p> + </li> + <li> + <p>It starts by matching the question and the given passage (using a gated attention-based RNN) to obtain a question-aware passage representation.</p> + </li> + <li> + <p>Next, it uses a self-matching attention mechanism to refine the passage representation by matching the passage against itself.</p> + </li> + <li> + <p>Lastly, it uses pointer networks to determine the position of the answer in the passage.</p> + </li> + <li> + <p><a href="https://www.microsoft.com/en-us/research/publication/mrc/">Link to the paper</a></p> + </li> +</ul> + +<h2 id="datasets">Datasets</h2> + +<ul> + <li> + <p>SQuAD</p> + </li> + <li> + <p>MS-MARCO</p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>Question / Passage Encoder</p> + + <ul> + <li>Concatenate the word-level and character-level embeddings for each word and feed them into a bidirectional GRU to obtain the question and passage representations.</li> + </ul> + </li> + <li> + <p>Gated Attention based RNN</p> + + <ul> + <li> + <p>Given the question and passage representations, a sentence-pair representation is generated via soft alignment of the words in the question and in the passage.</p> + </li> + <li> + <p>The newly added gate captures the relation between the question and the current passage word, as only some parts of the passage are relevant for answering the given question.</p> + </li> + </ul> + </li> + <li> + <p>Self-Matching Attention</p> + + <ul> + <li> + <p>The passage representation obtained so far captures only limited context.</p> + </li> + <li> + <p>So the current representation is matched against itself so as to collect evidence from the entire passage and encode the evidence relevant to the current passage word and the question.</p> + </li> + </ul> + </li> + <li> + <p>Output Layer</p> + + <ul> + <li> + <p>Use a pointer network (initialized using attention pooling over the answer representation) to predict the position of the answer.</p> + </li> + <li> + <p>The loss function is the sum of the negative log probabilities of the start and end positions.</p> + </li> + </ul> + </li> + <li> + <p>Results</p> + + <ul> + <li> + <p>R-NET is ranked second on the <a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD Leaderboard</a> as of 7th August, 2017 and achieves the best published results on the MS-MARCO dataset.</p> + </li> + <li> + <p>Ideas like sentence ranking, using syntax information, performing multi-hop inference and augmenting the question dataset (using a seq2seq network) did not help in improving the performance.</p> + </li> + </ul> + </li> +</ul> + + + + + ReasoNet - Learning to Stop Reading in Machine Comprehension + + 2017-07-24T00:00:00-04:00 + /site/2017/07/24/ReasoNet - Learning to Stop Reading in Machine Comprehension + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>In the domain of machine comprehension, making multiple passes over the given document is an effective technique to extract the relation between the given passage, question and answer.</p> + </li> + <li> + <p>Unlike previous approaches, which perform a fixed number of passes over the passage, Reasoning Network (ReasoNet) uses
reinforcement learning (RL) to decide how many times a document should be read.</p> + </li> + <li> + <p>Every time the document is read, ReasoNet determines whether the document should be read again or whether the termination state has been reached. If the termination state is reached, the answer module is triggered to generate the answer.</p> + </li> + <li> + <p>Since the termination state is discrete and not connected to the final output, an RL approach is used.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1609.05284">Link to the paper</a></p> + </li> +</ul> + +<h2 id="datasets">Datasets</h2> + +<ul> + <li> + <p>CNN, DailyMail Dataset</p> + </li> + <li> + <p>SQuAD</p> + </li> + <li> + <p>Graph Reachability Dataset</p> + <ul> + <li>2 synthetic datasets to test if the network can answer questions like “Is node_1 connected to node_12?”</li> + </ul> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p><strong>Memory (M)</strong> - Comprises the vector representations of the document and the question (encoded using a GRU or other RNNs).</p> + </li> + <li> + <p><strong>Attention</strong> - The attention vector (<strong>x<sub>t</sub></strong>) is a function of the current internal state <strong>s<sub>t</sub></strong> and the external memory <strong>M</strong>. The state and memory are passed through FC layers and fed to a similarity function.</p> + </li> + <li> + <p><strong>Internal State (s<sub>t</sub>)</strong> - Vector representation of the question state, computed by an RNN using the previous internal state and the attention vector <strong>x<sub>t</sub></strong></p> + </li> + <li> + <p><strong>Termination Gate (T<sub>t</sub>)</strong> - Uses a logistic regression model to generate a random binary variable using the current internal state <strong>s<sub>t</sub></strong> (see the sketch at the end of this summary).</p> + </li> + <li><strong>Answer</strong> - The answer module is triggered when <strong>T<sub>t</sub> = 1</strong>. + <ul> + <li>For CNN and DailyMail, a linear projection of the GRU outputs is used to predict the answer from the candidate entities.</li> + <li>For SQuAD, the positions of the first and the last word of the answer span are predicted.</li> + <li>For Graph Reachability, a logistic regression module is used to predict yes/no as the answer.</li> + </ul> + </li> + <li> + <p><strong>Reinforcement Learning</strong> - For the RL setting, the reward at time <strong>t</strong>, <strong>r<sub>t</sub></strong> = 1 if <strong>T<sub>t</sub></strong> = 1 and the answer is correct.
Otherwise, <strong>r<sub>t</sub> = 0</strong></p> + </li> + <li> + <p><strong>Workflow</strong> - Given a passage p, query q and answer a:</p> + + <ul> + <li> + <p>Extract the memory using p</p> + </li> + <li> + <p>Extract the initial hidden state using q</p> + </li> + <li> + <p>ReasoNet executes all possible episodes that can be enumerated by setting an upper limit on the number of passes.</p> + </li> + <li> + <p>These episodes generate actions and answers that are used to train the ReasoNet.</p> + </li> + </ul> + </li> + <li> + <p><strong>Result</strong></p> + + <ul> + <li> + <p>CNN, DailyMail Corpus</p> + + <ul> + <li>ReasoNet outperforms all the baselines, which use a fixed number of reasoning steps, and could further benefit from capturing the word alignment signals between the query and the passage.</li> + </ul> + </li> + <li> + <p>SQuAD</p> + + <ul> + <li>At the time of submission, ReasoNet was ranked 2nd on the <a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD leaderboard</a> and as of 9th July 2017, it is ranked 4th.</li> + </ul> + </li> + <li> + <p>Graph Reachability Dataset</p> + + <ul> + <li> + <p>ReasoNet - Standard ReasoNet as described above.</p> + </li> + <li> + <p>ReasoNet-Last - Use the prediction from the last step, <strong>T<sub>max</sub></strong></p> + </li> + <li> + <p>ReasoNet &gt; ReasoNet-Last &gt; Deep LSTM Reader</p> + </li> + <li> + <p>ReasoNet converges faster than ReasoNet-Last, indicating that the termination gate is useful.</p> + </li> + </ul> + </li> + </ul> + </li> + <li> + <p><strong>Notes</strong></p> + + <ul> + <li>As such, there is nothing discouraging ReasoNet from making unnecessary passes over the passage.</li> + <li>In fact, the modal value of the number of passes equals the upper bound on the number of passes.</li> + <li>This effect is more prominent for larger graphs, indicating that ReasoNet may try to play it safe by performing extra passes.</li> + <li>It would be interesting to see if the network can be discouraged from making unnecessary passes by awarding a small negative reward for each pass.</li> + </ul> + </li> +</ul>
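+<p>A minimal sketch of the termination gate, assuming PyTorch (illustrative, not the authors’ code): a logistic model over the internal state emits a stochastic binary stop decision, which is trained through the reward described above:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class TerminationGate(nn.Module):
+    """Sketch of the ReasoNet termination gate: a logistic model over
+    the internal state s_t emits a stochastic binary 'stop' decision."""
+    def __init__(self, state_dim):
+        super().__init__()
+        self.linear = nn.Linear(state_dim, 1)
+
+    def forward(self, s_t):
+        p_stop = torch.sigmoid(self.linear(s_t))  # termination probability
+        t_t = torch.bernoulli(p_stop)             # sampled binary T_t
+        return t_t, p_stop
+
+gate = TerminationGate(state_dim=128)
+stop, prob = gate(torch.randn(1, 128))
+# in training, the discrete T_t is credited via the RL reward r_t
+</code></pre>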
+ + + + + Principled Detection of Out-of-Distribution Examples in Neural Networks + + 2017-07-17T00:00:00-04:00 + /site/2017/07/17/Principled Detection of Out of Distribution Examples in Neural Networks + <h2 id="problem-statement">Problem Statement</h2> + +<ul> + <li> + <p>Given a pre-trained neural network, which is trained using data from some distribution P (referred to as in-distribution data), the task is to detect the examples coming from a distribution Q which is different from P (referred to as out-of-distribution data).</p> + </li> + <li> + <p>For example, if a digit recognizer neural network is trained using MNIST images, an out-of-distribution example would be images of animals.</p> + </li> + <li> + <p>Neural networks can make high confidence predictions even in such cases where the input is unrecognisable or irrelevant.</p> + </li> + <li> + <p>The paper proposes <em>ODIN</em>, which can detect such out-of-distribution examples without changing the pre-trained model itself.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1706.02690">Link to the paper</a></p> + </li> +</ul> + +<h2 id="odin">ODIN</h2> + +<ul> + <li> + <p>Uses 2 major techniques</p> + + <ul> + <li><strong>Temperature Scaling</strong> + <ul> + <li> + <p>The softmax classifier for the classification network can be written as:</p> + + <p><em>p<sub>i</sub>(x, T) = exp(f<sub>i</sub>(x)/T) / sum<sub>j</sub>(exp(f<sub>j</sub>(x)/T))</em></p> + </li> + </ul> + + <p>where <em>x</em> is the input, <em>p</em> is the softmax probability and <em>T</em> is the temperature scaling parameter.</p> + + <ul> + <li>Increasing <em>T</em> (up to some extent) boosts the performance in distinguishing in-distribution and out-of-distribution examples.</li> + </ul> + </li> + <li><strong>Input Preprocessing</strong> + <ul> + <li> + <p>Add small perturbations to the input (image) before feeding it into the network.</p> + </li> + <li> + <p><em>x_perturbed = x - ε * sign(-∇<sub>x</sub>log(p<sub>y</sub>(x, T)))</em></p> + </li> + </ul> + + <p>where ε is the perturbation magnitude</p> + + <ul> + <li>The perturbations are such that the softmax scores of in-distribution and out-of-distribution samples become separable.</li> + </ul> + </li> + </ul> + </li> + <li>Given an input (image), first perturb the input.</li> + <li>Feed the perturbed input to the network to get its softmax score (see the sketch at the end of this summary).</li> + <li>If the softmax score is greater than some threshold, mark the input as in-distribution and feed the unperturbed version of the input to the network for classification.</li> + <li>Otherwise, mark the input as out-of-distribution.</li> + <li>For a detailed mathematical treatment, refer to section 6 and the appendix of the <a href="https://arxiv.org/abs/1706.02690">paper</a></li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>Code available on <a href="https://github.com/ShiyuLiang/odin-pytorch">github</a></p> + </li> + <li> + <p>Models</p> + + <ul> + <li>DenseNet with depth L = 100 and growth rate k = 12</li> + <li>Wide ResNet with depth = 28 and widen factor = 10</li> + </ul> + </li> + <li> + <p>In-Distribution Datasets</p> + + <ul> + <li>CIFAR-10</li> + <li>CIFAR-100</li> + </ul> + </li> + <li> + <p>Out-of-Distribution Datasets</p> + + <ul> + <li>TinyImageNet</li> + <li>LSUN</li> + <li>iSUN</li> + <li>Gaussian Noise</li> + </ul> + </li> + <li> + <p>Metrics</p> + + <ul> + <li>False Positive Rate at 95% True Positive Rate</li> + <li>Detection Error - minimum misclassification probability over all thresholds</li> + <li>Area Under the Receiver Operating Characteristic Curve</li> + <li>Area Under the Precision-Recall Curve</li> + </ul> + </li> + <li> + <p>ODIN outperforms the baseline across all datasets and all models by a good margin.</p> + </li> +</ul> + +<h2 id="notes">Notes</h2> + +<ul> + <li>Very simple and straightforward approach, with theoretical justification under some conditions.</li> + <li>Limited to examples from vision, so one cannot judge its applicability for NLP tasks.</li> +</ul>
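+<p>A minimal sketch of the two ODIN steps, assuming PyTorch and a generic classifier <code>model</code>; the values of T and ε below are illustrative (the paper tunes them per dataset/model pair):</p>
+
+<pre><code>import torch
+import torch.nn.functional as F
+
+def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
+    """Sketch of ODIN: temperature scaling + input perturbation.
+    Returns the max softmax score, to be compared against a threshold."""
+    x = x.clone().requires_grad_(True)
+    logits = model(x) / temperature
+    # perturb the input against the gradient of the log-softmax score
+    log_prob = F.log_softmax(logits, dim=-1).max(dim=-1).values.sum()
+    log_prob.backward()
+    x_perturbed = x - epsilon * torch.sign(-x.grad)
+    with torch.no_grad():
+        probs = F.softmax(model(x_perturbed) / temperature, dim=-1)
+    return probs.max(dim=-1).values
+
+# toy usage with a stand-in classifier
+model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
+scores = odin_score(model, torch.randn(4, 3, 32, 32))
+</code></pre>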
+ + + + + Ask Me Anything - Dynamic Memory Networks for Natural Language Processing + + 2017-07-09T00:00:00-04:00 + /site/2017/07/09/Ask Me Anything- Dynamic Memory Networks for Natural Language Processing + <h2 id="introduction">Introduction</h2> + +<ul> + <li> + <p>Dynamic Memory Networks (DMN) is a neural network based general framework that can be used for tasks like sequence tagging, classification, sequence to sequence, and question answering requiring transitive reasoning.</p> + </li> + <li> + <p>The basic idea is that all these tasks can be modelled as a question answering task in general and a common architecture could be used for solving them.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1506.07285">Link to the paper</a></p> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li>DMN takes as input a document (sentence, story, article, etc.) and a question which is to be answered given the document.</li> +</ul> + +<h3 id="input-module">Input Module</h3> + +<ul> + <li> + <p>Concatenate all the sentences (or facts) in the document and encode them by feeding the word embeddings of the text to a GRU.</p> + </li> + <li> + <p>Each time a sentence ends, extract the hidden representation of the GRU till that point and use it as the encoded representation of the sentence.</p> + </li> +</ul> + +<h3 id="question-module">Question Module</h3> + +<ul> + <li>Similarly, feed the question to a GRU to obtain its representation.</li> +</ul> + +<h3 id="episodic-memory-module">Episodic Memory Module</h3> + +<ul> + <li> + <p>The episodic memory consists of an attention mechanism and a recurrent network with which it updates its memory.</p> + </li> + <li> + <p>During each iteration, the network generates an episode <em>e</em> by attending over the representations of the sentences, the question and the previous memory.</p> + </li> + <li> + <p>The episodic memory is updated using the current episode and the previous memory.</p> + </li> + <li> + <p>Depending on the amount of supervision available, the network may perform multiple passes. E.g., in the bAbI dataset, some tasks specify how many passes would be needed and which sentence should be attended to in each pass. For others, a fixed number of passes are made.</p> + </li> + <li> + <p>Multiple passes allow the network to perform transitive inference.</p> + </li> +</ul> + +<h3 id="attention-mechanism">Attention Mechanism</h3> + +<ul> + <li> + <p>Given the input representation <em>c</em>, memory <em>m</em> and question <em>q</em>, produce a scalar score using a 2-layer feedforward network, to use as the attention score (a minimal sketch follows the observations below).</p> + </li> + <li> + <p>A separate GRU encodes the input representation and weights it by the attention.</p> + </li> + <li> + <p>The final state of the GRU is fed to the answer module.</p> + </li> +</ul> + +<h3 id="answer-module">Answer Module</h3> + +<ul> + <li>Use a GRU (initialized with the final state of the episodic module) and at each timestep, feed it the question vector, the last hidden state of the same GRU and the previously predicted output.</li> +</ul> + +<h3 id="training">Training</h3> + +<ul> + <li>There are two possible losses: + <ul> + <li>Cross-entropy loss of the predicted answer (all datasets)</li> + <li>Cross-entropy loss of the attention supervision (for datasets like bAbI)</li> + </ul> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<h3 id="question-answering">Question Answering</h3> + +<ul> + <li> + <p>bAbI Dataset</p> + </li> + <li> + <p>For most tasks, DMN either outperforms or performs as well as Memory Networks.</p> + </li> + <li> + <p>For tasks like answering with 2 or 3 supporting facts, DMN lags because of the limitations of RNNs in modelling long sentences.</p> + </li> +</ul> + +<h3 id="text-classification">Text Classification</h3> + +<ul> + <li> + <p>Stanford Sentiment Treebank Dataset</p> + </li> + <li> + <p>DMN outperforms all the baselines for both binary and fine-grained sentiment analysis.</p> + </li> +</ul> + +<h3 id="sequence-tagging">Sequence Tagging</h3> + +<ul> + <li> + <p>Wall Street Journal Dataset</p> + </li> + <li> + <p>DMN achieves a state of the art accuracy of 97.56%</p> + </li> +</ul> + +<h2 id="observations">Observations</h2> + +<ul> + <li> + <p>Multiple passes help in reasoning tasks but not so much for sentiment/POS tags.</p> + </li> + <li> + <p>Attention in the case of the 2-iteration DMN is more focused than attention in the 1-iteration DMN.</p> + </li> + <li> + <p>For the 2-iteration DMN, attention in the second iteration focuses only on relevant words, and less attention is paid to words that lose their relevance in the context of the entire document.</p> + </li> +</ul>
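+<p>A minimal sketch of the attention gate, assuming PyTorch; the actual DMN uses a richer interaction feature vector between <em>c</em>, <em>m</em> and <em>q</em>, of which this keeps only a representative subset:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class EpisodicAttention(nn.Module):
+    """Sketch of the DMN attention gate: a 2-layer feed-forward net
+    scores each sentence representation c against memory m and question q."""
+    def __init__(self, dim, hidden=64):
+        super().__init__()
+        self.mlp = nn.Sequential(nn.Linear(4 * dim, hidden), nn.Tanh(),
+                                 nn.Linear(hidden, 1))
+
+    def forward(self, c, m, q):
+        # c: (num_sentences, dim); m, q: (dim,)
+        m = m.expand_as(c)
+        q = q.expand_as(c)
+        # a subset of the interaction features between c, m and q
+        feats = torch.cat([c * q, c * m, (c - q).abs(), (c - m).abs()], dim=-1)
+        return torch.softmax(self.mlp(feats).squeeze(-1), dim=0)  # gates
+
+att = EpisodicAttention(dim=32)
+gates = att(torch.randn(5, 32), torch.randn(32), torch.randn(32))
+</code></pre>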
<h2 id="notes">Notes</h2> + +<ul> + <li> + <p>It would be interesting to put some mechanism in place to determine the number of episodes that should be generated before an answer is predicted. A naive way would be to predict the answer after each episode and check if the softmax score of the predicted answer is more than a threshold.</p> + </li> + <li> + <p>Alternatively, the softmax score and other information could be fed to a Reinforcement Learning (RL) agent which decides if the document should be read again. So every time an episode is generated, the state is passed to the RL agent, which decides if another iteration should be performed. If it decides to predict the answer and the correct answer is generated, the agent gets a large positive reward, else a large negative reward.</p> + </li> + <li> + <p>To discourage unnecessary iterations, a small negative reward could be given every time the agent decides to perform another iteration.</p> + </li> +</ul> + + + + + One Model To Learn Them All + + 2017-07-01T00:00:00-04:00 + /site/2017/07/01/One Model To Learn Them All + <ul> + <li> + <p>The current trend in deep learning is to design, train and fine-tune a separate model for each problem.</p> + </li> + <li> + <p>Though multi-task models have been explored, they have been trained on problems from the same domain only, and no competitive multi-task, multi-modal models have been proposed.</p> + </li> + <li> + <p>The paper explores the possibility of such a unified deep learning model that can solve different tasks across multiple domains by training concurrently on them.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1706.05137">Link to the paper</a></p> + </li> +</ul> + +<h2 id="design-philosophy">Design Philosophy</h2> + +<ul> + <li> + <p>Small, modality-specific subnetworks (called modality nets) should be used to map input data to a joint representation space and back.</p> + + <ul> + <li> + <p>The joint representation is to be of variable size.</p> + </li> + <li> + <p>Different tasks from the same domain share the modality net.</p> + </li> + </ul> + </li> + <li> + <p>MultiModel networks should use computational blocks from different domains even if they are not specifically designed for the task at hand.</p> + + <ul> + <li>E.g., the paper reports that attention and mixture-of-experts (MoE) layers slightly improve the performance on ImageNet even though they are not explicitly needed.</li> + </ul> + </li> +</ul> + +<h2 id="architecture">Architecture</h2> + +<ul> + <li> + <p>The MultiModel network consists of a few small modality nets, an encoder, an I/O mixer and an autoregressive decoder.</p> + </li> + <li> + <p>The encoder and decoder use the following computational blocks:</p> + + <ul> + <li> + <p><strong>Convolutional Block</strong></p> + + <ul> + <li>ReLU activations on the inputs, followed by depthwise separable convolutions and layer normalization (see the sketch at the end of this summary).</li> + </ul> + </li> + <li> + <p><strong>Attention Block</strong></p> + + <ul> + <li>Multihead, dot-product based attention mechanism.</li> + </ul> + </li> + <li> + <p><strong>Mixture-of-Experts (MoE) Block</strong></p> + + <ul> + <li>Consists of simple feed-forward networks (called experts) and a trainable gating network which selects a sparse combination of experts to process the inputs.</li> + </ul> + </li> + <li> + <p>For further details, refer to the <a href="https://arxiv.org/abs/1706.05137">original paper</a>.</p> + </li> + </ul> + </li> + <li> + <p><strong>Encoder</strong> consists of 6 conv blocks with a MoE
block in the middle.</p> + </li> + <li> + <p><strong>I/O mixer</strong> consists of an attention block and 2 conv blocks.</p> + </li> + <li> + <p><strong>Decoder</strong> consists of 4 blocks of convolution and attention with a MoE block in the middle.</p> + </li> + <li> + <p><strong>Modality Nets</strong></p> + + <ul> + <li> + <p><strong>Language Data</strong></p> + + <ul> + <li> + <p>The input is the sequence of tokens ending in a termination token.</p> + </li> + <li> + <p>This sequence is mapped to the correct dimensionality using a learned embedding.</p> + </li> + <li> + <p>For the output, the network takes the decoded output and performs a learned linear mapping followed by a softmax.</p> + </li> + </ul> + </li> + <li> + <p><strong>Image</strong> and <strong>Categorical Data</strong></p> + + <ul> + <li> + <p>Uses residual convolution blocks.</p> + </li> + <li> + <p>Similar to the exit flow of the <a href="https://arxiv.org/abs/1610.02357">Xception Network</a></p> + </li> + </ul> + </li> + <li> + <p><strong>Audio Data</strong></p> + + <ul> + <li>1-d waveform over time or 2-d spectrogram, operated upon by a stack of 8 residual convolution blocks.</li> + </ul> + </li> + </ul> + </li> +</ul> + +<h2 id="tasks">Tasks</h2> + +<ul> + <li> + <p>WSJ speech corpus</p> + </li> + <li> + <p>ImageNet dataset</p> + </li> + <li> + <p>COCO image captioning dataset</p> + </li> + <li> + <p>WSJ parsing dataset</p> + </li> + <li> + <p>WMT English-German translation corpus</p> + </li> + <li> + <p>German-English translation</p> + </li> + <li> + <p>WMT English-French translation corpus</p> + </li> + <li> + <p>French-English translation</p> + </li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li> + <p>The experimental section is not very rigorous, with many details skipped (which would probably be added later).</p> + </li> + <li> + <p>While MultiModel does not beat the state of the art models, it does outperform some recent models.</p> + </li> + <li> + <p>The jointly trained model performs similarly to individually trained models on tasks with a lot of data, and sometimes outperforms them on tasks with less data (like parsing).</p> + </li> + <li> + <p>Interestingly, jointly training the model on the parsing and ImageNet tasks improves the performance on the parsing task, even though the two tasks are seemingly unrelated.</p> + </li> + <li> + <p>Another experiment was done to evaluate the effect of components (like MoE) on tasks (like ImageNet) which do not explicitly need them. It was observed that the performance either went down or remained the same when the MoE component was removed. This indicates that mixing different components does help to improve performance over multiple tasks.</p> + </li> + <li> + <p>But this observation is not conclusive, as a different configuration of, say, the encoder (one that does not use MoE) could achieve better performance than one that does. The paper does not explore possibilities like these.</p> + </li> +</ul>
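+<p>A minimal sketch of the convolutional block, assuming PyTorch over (batch, length, channels) tensors; the kernel size and channel counts are illustrative, not the paper’s:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class ConvBlock(nn.Module):
+    """Sketch of the MultiModel conv block: ReLU on the input, a
+    depthwise separable convolution, then layer normalization."""
+    def __init__(self, channels, kernel_size=3):
+        super().__init__()
+        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
+                                   padding=kernel_size // 2, groups=channels)
+        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
+        self.norm = nn.LayerNorm(channels)
+
+    def forward(self, x):                      # x: (batch, length, channels)
+        h = torch.relu(x).transpose(1, 2)      # Conv1d wants channels first
+        h = self.pointwise(self.depthwise(h)).transpose(1, 2)
+        return self.norm(h)
+
+block = ConvBlock(channels=16)
+out = block(torch.randn(2, 20, 16))            # same shape out: (2, 20, 16)
+</code></pre>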
+ + + + + Two/Too Simple Adaptations of Word2Vec for Syntax Problems + + 2017-06-26T00:00:00-04:00 + /site/2017/06/26/Two-Too Simple Adaptations of Word2Vec for Syntax Problems + <ul> + <li>The paper proposes two variants of the Word2Vec model so that it may account for the syntactic properties of words and perform better on syntactic tasks like POS tagging and dependency parsing.</li> + <li><a href="http://www.cs.cmu.edu/~lingwang/papers/naacl2015.pdf">Link to the paper</a></li> + <li>In the original Skip-Gram setting, the model predicts the <em>2c</em> words in the context window (<em>c</em> is the size of the context window). But it uses the same set of parameters whether predicting the word next to the centre word or the word farthest away, thus losing all information about the word order.</li> + <li>Similarly, the CBOW (Continuous Bag of Words) model just adds the embeddings of all the surrounding words, thereby losing the word order information.</li> + <li>The paper proposes to use a set of <em>2c</em> matrices, one for each position in the context window, for both the Skip-Gram and CBOW models (see the sketch below).</li> + <li>This simple trick allows for accounting of syntactic properties in the word vectors and improves the performance on the dependency parsing and POS tagging tasks.</li> + <li>The downside is that the model now has far more parameters than before, which increases the training time and needs a large enough corpus to avoid sparse representations.</li> +</ul>
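+<p>A minimal sketch of the structured variant for the Skip-Gram case, assuming PyTorch and toy sizes; in the original Word2Vec, a single output matrix would be shared across all 2c positions:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class StructuredSkipGram(nn.Module):
+    """Skip-gram with a separate output matrix per context position, so
+    predicting the word at offset -2 uses different parameters than
+    predicting the word at offset +1."""
+    def __init__(self, vocab_size, dim, window):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, dim)
+        # 2c output matrices, one per position in the context window
+        self.out = nn.ModuleList(
+            [nn.Linear(dim, vocab_size, bias=False) for _ in range(2 * window)])
+
+    def forward(self, center, position):
+        # center: (batch,) word ids; position: index of the context slot
+        return self.out[position](self.embed(center))  # (batch, vocab_size)
+
+model = StructuredSkipGram(vocab_size=5000, dim=100, window=2)
+logits = model(torch.tensor([42, 7]), position=3)  # predict offset +2
+</code></pre>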
+ + + + + A Decomposable Attention Model for Natural Language Inference + + 2017-06-17T00:00:00-04:00 + /site/2017/06/17/A Decomposable Attention Model for Natural Language Inference + <h3 id="introduction">Introduction</h3> + +<ul> + <li>The paper proposes an attention based mechanism to decompose the problem of Natural Language Inference (NLI) into parallelizable subproblems.</li> + <li>Further, it uses far fewer parameters than any other model while obtaining state of the art results.</li> + <li><a href="https://arxiv.org/abs/1606.01933">Link to the paper</a></li> + <li>The motivation behind the paper is that tasks like NLI do not require deep modelling of the sentence structure; comparison of local text substructures followed by aggregation can also work very well.</li> +</ul> + +<h3 id="approach">Approach</h3> + +<ul> + <li> + <p>Given two sentences <strong>a</strong> and <strong>b</strong>, the model has to predict whether they have an “entailment” relationship, “neutral” relationship or “contradiction” relationship.</p> + </li> + <li><strong>Embed</strong> + <ul> + <li>All the words are mapped to their corresponding word vector representations. In subsequent steps, “word” refers to the word vector representation of the actual word.</li> + </ul> + </li> + <li><strong>Attend</strong> + <ul> + <li>For each word <em>i</em> in <strong>a</strong> and <em>j</em> in <strong>b</strong>, obtain unnormalized attention weights <em>e(i, j) = F(i)<sup>T</sup>F(j)</em>, where F is a feed-forward neural network (see the sketch at the end of this summary).</li> + <li>For each <em>i</em>, compute β<sub>i</sub> as the weighted combination of all the words <em>j</em> in <strong>b</strong>, using the softmax-normalized <em>e(i, j)</em> as the weights.</li> + <li>β<sub>i</sub> captures the subphrase in <strong>b</strong> that is softly aligned to word <em>i</em> of <strong>a</strong>.</li> + <li>Similarly, compute α<sub>j</sub> for each <em>j</em>.</li> + </ul> + </li> + <li><strong>Compare</strong> + <ul> + <li>Create two sets of comparison vectors, one for <strong>a</strong> and another for <strong>b</strong></li> + <li>For <strong>a</strong>, <strong>v<sub>1, i</sub></strong> = G(concatenate(i, β<sub>i</sub>)).</li> + <li>Similarly for <strong>b</strong>, <strong>v<sub>2, j</sub></strong> = G(concatenate(j, α<sub>j</sub>))</li> + <li>G is another feed-forward neural network.</li> + </ul> + </li> + <li><strong>Aggregate</strong> + <ul> + <li>Aggregate over the two sets of comparison vectors to obtain <strong>v<sub>1</sub></strong> and <strong>v<sub>2</sub></strong>.</li> + <li>Feed the aggregated results through the final classifier layer.</li> + <li>Multi-class cross-entropy loss function.</li> + </ul> + </li> + <li>The paper also explains how this representation can be augmented using intra-sentence attention to model the compositional relationships between words.</li> +</ul> + +<h3 id="computational-complexity">Computational Complexity</h3> + +<ul> + <li>Computationally, the proposed model is asymptotically as good as an LSTM with attention.</li> + <li>Assuming that the dimensionality of the word vectors &gt; the length of the sentence (reasonable for the given SNLI dataset), the model is asymptotically as good as a regular LSTM.</li> + <li>Further, the model has the advantage of being parallelizable.</li> +</ul> + +<h3 id="experiment">Experiment</h3> + +<ul> + <li>On the Stanford Natural Language Inference (SNLI) dataset, the proposed model achieves state of the art results even though it uses an order of magnitude fewer parameters than the next best model.</li> + <li>Adding intra-sentence attention further improves the test accuracy by 0.5 percent.</li> +</ul> + +<h3 id="notes">Notes</h3> + +<ul> + <li>A similar approach could be tried on the paraphrase detection problem, as even that problem should not require very deep sentence representations. The <a href="https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs">Quora Duplicate Question Detection Challenge</a> would have been an ideal dataset, but it has a lot of out-of-vocabulary information related to named entities which needs to be accounted for.</li> +</ul>
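+<p>A minimal sketch of the attend / compare / aggregate pipeline for a single sentence pair, assuming PyTorch; the layer sizes are illustrative and the intra-sentence attention variant is omitted:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+def mlp(d_in, d_out):
+    return nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(),
+                         nn.Linear(d_out, d_out), nn.ReLU())
+
+class DecomposableAttention(nn.Module):
+    """Attend / compare / aggregate over two sentences a and b, given as
+    matrices of word vectors."""
+    def __init__(self, dim, hidden, n_classes=3):
+        super().__init__()
+        self.F = mlp(dim, hidden)           # scores word pairs
+        self.G = mlp(2 * dim, hidden)       # compares word vs aligned phrase
+        self.H = nn.Linear(2 * hidden, n_classes)
+
+    def forward(self, a, b):                # a: (la, dim); b: (lb, dim)
+        e = self.F(a) @ self.F(b).T         # (la, lb) unnormalized weights
+        beta = torch.softmax(e, dim=1) @ b  # subphrase of b aligned to each i
+        alpha = torch.softmax(e, dim=0).T @ a
+        v1 = self.G(torch.cat([a, beta], dim=-1)).sum(dim=0)   # aggregate
+        v2 = self.G(torch.cat([b, alpha], dim=-1)).sum(dim=0)
+        return self.H(torch.cat([v1, v2]))  # logits over the 3 NLI classes
+
+model = DecomposableAttention(dim=50, hidden=64)
+logits = model(torch.randn(9, 50), torch.randn(7, 50))
+</code></pre>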
<a href="https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs">Quora Duplicate Question Detection Challenege</a> would have been an ideal dataset but it has a lot of out-of-vocabulary information related to named entities which need to be accounted for.</li> +</ul> + + + + + A Fast and Accurate Dependency Parser using Neural Networks + + 2017-06-03T00:00:00-04:00 + /site/2017/06/03/A Fast and Accurate Dependency Parser using Neural Networks + <h2 id="introduction">Introduction</h2> +<ul> + <li>The paper proposes a neural network classifier to perform transition-based dependency parsing using dense vector representation for the features.</li> + <li>Earlier approaches used a large, manually designed sparse feature vector which took a lot of time and effort to compute and was often incomplete.</li> + <li><a href="http://cs.stanford.edu/people/danqi/papers/emnlp2014.pdf">Link to the paper</a></li> +</ul> + +<h2 id="description-of-the-system">Description of the system</h2> + +<ul> + <li>The system described in the paper uses <a href="http://www.mitpressjournals.org/doi/pdf/10.1162/coli.07-056-R1-07-027"><strong>arc-standard</strong> system</a> (a greedy, transition-based dependency parsing system).</li> + <li>Words, POS tags and arc labels are represented as d dimensional vectors.</li> + <li>S<sup>w</sup>, S<sup>t</sup>, S<sup>l</sup> denote the set of words, POS and labels respectively.</li> + <li>Neural network takes as input selected words from the 3 sets and uses a single hidden layer followed by Softmax which models the different actions that can be chosen by the arc-standard system.</li> + <li>Uses a cube activation function to allow interaction between features coming from the set of words, POS and labels in the first layer itself. These features come from different embeddings and are not related as such.</li> + <li>Using separate embedding for POS tags and labels allow for capturing aspects like NN (singular noun) should be closer to NNS (plural noun) than DT (determiner).</li> + <li>Input to the network contains words on the stack and buffer and their left and right children (read upon transition-based parsing), their labels and corresponding arc labels.</li> + <li>Output generated by the system is the action to be taken (transition to be performed) when reading each word in the input.</li> + <li>This sequential and deterministic nature of the input-output mapping allows the problem to be modelled as a supervised learning problem and a cross entropy loss can be used.</li> + <li>L2-regularization term is also added to the loss.</li> + <li>During inference, a greedy decoding strategy is used and transition with the highest score is chosen.</li> + <li>The paper mentions a pre-computation trick where matrix computation of most frequent top 10000 words is performed beforehand and cached.</li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<ul> + <li>Dataset + <ul> + <li>English Penn Treebank (PTB)</li> + <li>Chinese Penn Treebank (CTB)</li> + </ul> + </li> + <li>Two dependency representations used: + <ul> + <li>CoNLL Syntactic Dependencies (CD)</li> + <li>Stanford Basic Dependencies (SD)</li> + </ul> + </li> + <li>Metrics: + <ul> + <li>Unlabeled Attached Scores (UAS)</li> + <li>Labeled Attached Scores (LAS)</li> + </ul> + </li> + <li>Benchmarked against: + <ul> + <li>Greedy arc-eager parser</li> + <li>Greedy arc-standard parser</li> + <li>Malt-Parser</li> + <li>MSTParser</li> + </ul> + </li> + <li>Results + <ul> + <li>The system proposed in the paper outperforms all other 
<h2 id="analysis">Analysis</h2> + +<ul> + <li>The cube function gives a 0.8-1.2% improvement over tanh.</li> + <li>Pre-trained embeddings give a 0.7-1.7% improvement over training embeddings from scratch.</li> + <li>Using POS tags and labels gives an improvement of 1.7% and 0.4% respectively.</li> +</ul> + + + + + Neural Module Networks + + 2017-05-23T00:00:00-04:00 + /site/2017/05/23/Neural Module Networks + <h2 id="introduction">Introduction</h2> + +<ul> + <li>For the task of <a href="https://shagunsodhani.in/papers-I-read/VQA-Visual-Question-Answering">Visual Question Answering</a>, decompose a question into its linguistic substructures and train a neural network module for each substructure.</li> + <li>Jointly train the modules and dynamically compose them into deep networks which can learn to answer the question.</li> + <li>Start by analyzing the question and deciding what logical units are needed to answer the question and what the relationship between them should be.</li> + <li>The paper also introduces a new dataset for Visual Question Answering which has challenging, highly compositional questions about abstract shapes.</li> + <li><a href="https://arxiv.org/abs/1511.02799">Link to the paper</a></li> +</ul> + +<h2 id="inspiration">Inspiration</h2> + +<ul> + <li>Questions tend to be compositional.</li> + <li>Different architectures are needed for different tasks - CNNs for object detection, RNNs for counting.</li> + <li>Recurrent and Recursive Neural Networks also use the idea of a different network graph for each input.</li> +</ul> + +<h2 id="neural-module-network-for-vqa">Neural Module Network for VQA</h2> + +<ul> + <li>Training samples of the form <em>(w, x, y)</em> + <ul> + <li><em>w</em> - Natural language question</li> + <li><em>x</em> - Image</li> + <li><em>y</em> - Answer</li> + </ul> + </li> + <li>The model is specified by a collection of modules <em>{m}</em> and a network layout predictor <em>P</em>.</li> + <li>The model instantiates a network based on <em>P(w)</em> and uses that to encode a distribution <em>P(y|w, x, model_params)</em></li> +</ul> + +<h2 id="modules">Modules</h2> + +<ul> + <li>Find: Finds objects of interest.</li> + <li>Transform: Shifts regions of attention.</li> + <li>Combine: Merges two attention maps into a single one.</li> + <li>Describe: Maps a pair of attention map and input image to a distribution over the labels.</li> + <li>Measure: Maps an attention map to a distribution over the labels.</li> +</ul> + +<h2 id="natural-language-question-to-networks">Natural Language Question to Networks</h2> + +<ul> + <li>Map the question to a layout which specifies the set of modules and the connections between them.</li> + <li>Assemble the final network using the layout (see the sketch below).</li> + <li>Parse the input question to obtain a set of dependencies and a representation similar to combinatory logic.</li> + <li>E.g., “what is the colour of the truck?” becomes “colour(truck)”</li> + <li>The symbolic representation is mapped to a layout: + <ul> + <li>All leaves become <em>find</em> modules.</li> + <li>All internal nodes become <em>transform/combine</em> modules.</li> + <li>Root nodes become <em>describe/measure</em> modules.</li> + </ul> + </li> +</ul> + +<h2 id="answering-natural-language-question">Answering Natural Language Question</h2> + +<ul> + <li>The final model combines the output from a simple LSTM question encoder with the output of the neural module network.</li> + <li>This helps in modelling the syntactic and semantic regularities of the question.</li> +</ul>
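+<p>A minimal sketch of two modules and their composition for “colour(truck)”, assuming PyTorch over a toy grid of image features; the module parameterizations are illustrative, not the paper’s exact ones:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class Find(nn.Module):
+    """Produces an attention map over image regions for a concept."""
+    def __init__(self, feat_dim):
+        super().__init__()
+        self.score = nn.Conv2d(feat_dim, 1, kernel_size=1)
+    def forward(self, image_feats):                # (feat_dim, H, W)
+        return torch.sigmoid(self.score(image_feats.unsqueeze(0)))[0, 0]
+
+class Describe(nn.Module):
+    """Maps an attention map plus image features to label scores."""
+    def __init__(self, feat_dim, n_labels):
+        super().__init__()
+        self.out = nn.Linear(feat_dim, n_labels)
+    def forward(self, attention, image_feats):
+        attended = (image_feats * attention).sum(dim=(1, 2))  # pooled feats
+        return self.out(attended)
+
+# assemble "colour(truck)" as describe_colour(find_truck(image))
+feats = torch.randn(64, 7, 7)
+find_truck, describe_colour = Find(64), Describe(64, n_labels=10)
+label_scores = describe_colour(find_truck(feats), feats)
+</code></pre>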
<h2 id="experiments">Experiments</h2> + +<ul> + <li>Since some modules are updated more frequently than others, adaptive per-weight learning rates work better.</li> + <li>The paper introduces a small SHAPES dataset (64 images and 244 unique questions per image).</li> + <li>The Neural Module Network achieves a score of 90% on the SHAPES dataset while the VIS + LSTM baseline achieves an accuracy of 65.3%.</li> + <li>Even on natural images (VQA dataset), the neural module network outperforms the VIS + LSTM baseline.</li> +</ul> + + + + + + Making the V in VQA Matter - Elevating the Role of Image Understanding in Visual Question Answering + + 2017-05-14T00:00:00-04:00 + /site/2017/05/14/Making the V in VQA Matter - Elevating the Role of Image Understanding in Visual Question Answering + <h3 id="problem-statement">Problem Statement</h3> + +<ul> + <li>Standard VQA models benefit from the inherent bias in the structure of the world and the language of the question.</li> + <li>For example, if the question starts with “Do you see a …”, the answer is more likely to be “yes” than “no”.</li> + <li>To truly assess the capability of any VQA system, we need evaluation tasks that require the use of both the visual and the language modality.</li> + <li>The authors present a balanced version of the <a href="https://shagunsodhani.in/papers-I-read/VQA-Visual-Question-Answering">VQA dataset</a> where each question in the dataset is associated with a pair of similar images such that the same question gives different answers on the two images.</li> + <li>The proposed data collection procedure enables the authors to develop a novel interpretable model which, given an image and a question, identifies an image that is similar to the original image but has a different answer to the same question, thereby building trust in the system.</li> + <li><a href="https://arxiv.org/abs/1612.00837">Link to the paper</a></li> +</ul> + +<h3 id="dataset-collection">Dataset Collection</h3> + +<ul> + <li>Given an (image, question, answer) triplet (I, Q, A) from the VQA dataset, a human worker (on AMT) is asked to identify an image I’ which is similar to I but for which the answer to question Q is A’ (different from A).</li> + <li>To facilitate the search for I’, the worker is shown the 24 nearest-neighbor images of I (based on VGGNet features) and is asked to choose the most similar image to I for which Q makes sense and the answer to Q is different from A.
In case none of the 24 images qualifies, the worker may select “not possible”.</li> + <li>In the second round, the workers were asked to answer Q for I’.</li> + <li>This 2-stage protocol results in a significantly more balanced dataset than the previous one.</li> +</ul> + +<h3 id="observation">Observation</h3> + +<ul> + <li>State-of-the-art models trained on the unbalanced VQA dataset perform significantly worse on the new, balanced dataset, indicating that those models benefitted from the language bias in the older dataset.</li> + <li>Training on the balanced dataset improves performance on the unbalanced dataset.</li> + <li>Further, the VQA model, trained on the balanced dataset, learns to differentiate between otherwise similar images.</li> +</ul> + +<h3 id="counter-example-explanations">Counter-example Explanations</h3> + +<ul> + <li>Given an image and a question, the model not only answers the question, it also provides an image (from the k nearest neighbours of I, based on VGGNet features) which is similar to the input image but for which the model would have given a different answer to the same question.</li> + <li>The supervising signal is provided by the data collection procedure, where humans pick the image I’ from the same set of candidate images.</li> + <li>For each image in the candidate set, compute the inner product of the question-image embedding and the answer embedding.</li> + <li>The K inner product values are passed through a fully connected layer to generate K scores.</li> + <li>Trained with a pairwise hinge ranking loss so that the score of the human-picked image is higher than the score of all other images by a margin of M (hyperparameter).</li> + <li>The proposed explanation model achieves a recall@5 of 43.49%</li> +</ul> + + + + + Conditional Similarity Networks + + 2017-05-07T00:00:00-04:00 + /site/2017/05/07/Conditional Similarity Networks + <h2 id="problem-statement">Problem Statement</h2> + +<ul> + <li>A common way of measuring image similarity is to embed images into feature spaces where distance acts as a proxy for similarity.</li> + <li>But such a feature space can capture only one (or a weighted combination) of the many possible notions of similarity.</li> + <li>What if contrasting notions of similarity could be captured at the same time, in terms of semantically distinct subspaces?</li> + <li>The paper proposes a new architecture called Conditional Similarity Networks (CSNs) which learns a disentangled embedding such that the features, for different notions of similarity, are encoded into separate dimensions.</li> + <li>It jointly learns masks (or feature extractors) that select and reweight relevant dimensions to induce a subspace that encodes a specific notion of similarity.</li> + <li><a href="https://vision.cornell.edu/se3/conditional-similarity-networks/">Link to the paper</a></li> +</ul> + +<h2 id="conditional-similarity-networks">Conditional Similarity Networks</h2> + +<ul> + <li>Given an image, <em>x</em>, learn a non-linear feature embedding <em>f(x)</em> such that for any 2 images <em>x<sub>1</sub></em> and <em>x<sub>2</sub></em>, the Euclidean distance between <em>f(x<sub>1</sub>)</em> and <em>f(x<sub>2</sub>)</em> reflects their similarity.</li> +</ul> + +<h3 id="conditional-similarity-triplets">Conditional Similarity Triplets</h3> + +<ul> + <li>Given a triplet of images <em>(x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>)</em> and a condition <em>c</em> (the notion of similarity), an oracle (say, the crowd) is used to determine if <em>x<sub>1</sub></em> is more similar to
<em>x<sub>2</sub></em> or <em>x<sub>3</sub></em> as per the given criterion <em>c</em>.</li> + <li>In general, for images <em>i, j, l</em>, the triplet <em>t</em> is ordered {i, j, l | c} if <em>i</em> is more similar to <em>j</em> than to <em>l</em>.</li> +</ul> + +<h3 id="learning-from-triplets">Learning From Triplets</h3> + +<ul> + <li>Define a loss function <em>L<sub>T</sub>()</em> to model the similarity structure over the triplets.</li> + <li><em>L<sub>T</sub>(i, j, l) = max{0, D(i, j) - D(i, l) + h}</em> where <em>D</em> is the Euclidean distance function and <em>h</em> is the scalar similarity margin to prevent trivial solutions (see the sketch at the end of this summary).</li> + <li>To model conditional similarities, masks <em>m</em> are defined as <em>m = σ(β)</em> where σ is the ReLU unit and β is a set of parameters to be learnt.</li> + <li><em>m<sub>c</sub></em> denotes the selection of the c-th mask column from the feature vector. It thus acts as an element-wise gating function which selects the relevant dimensions of the embedding to attend to a particular similarity concept.</li> + <li>The Euclidean function <em>D</em> now computes the masked distance, i.e., the distance between <em>f(i)m<sub>c</sub></em> and <em>f(j)m<sub>c</sub></em>, for the two given images.</li> + <li>Two regularising terms are also added - an L2 norm on the embedding and an L1 norm on the masks <em>m</em>.</li> +</ul> + +<h2 id="experiments">Experiments</h2> + +<h3 id="datasets">Datasets</h3> + +<ul> + <li>Fonts dataset by Bernhardsson + <ul> + <li>3.1 million 64 by 64-pixel grey scale images.</li> + </ul> + </li> + <li>Zappos50k shoe dataset + <ul> + <li>Contains 50,000 images of individual richly annotated shoes.</li> + <li>Characteristics of interest: + <ul> + <li>Type of the shoes (i.e., shoes, boots, sandals or slippers)</li> + <li>Suggested gender of the shoes (i.e., for women, men, girls or boys)</li> + <li>Height of the shoes’ heels (0 to 5 inches)</li> + <li>Closing mechanism of the shoes (buckle, pull on, slip on, hook and loop or laced up)</li> + </ul> + </li> + </ul> + </li> +</ul> + +<h3 id="models">Models</h3> + +<ul> + <li>The initial model for the experiments is a ConvNet pre-trained on ImageNet</li> + <li><strong>Standard Triplet Network</strong> + <ul> + <li>Learn from all available triplets jointly as if they have the same notion of similarity.</li> + </ul> + </li> + <li><strong>Set of Task Specific Triplet Networks</strong> + <ul> + <li>Train n separate triplet networks such that each is trained on a single notion of similarity.</li> + <li>Needs far more parameters and compute.</li> + </ul> + </li> + <li><strong>Conditional Similarity Networks - fixed disjoint masks</strong> + <ul> + <li>In this version, only the convolutional filters and the embedding are learnt and the masks are predefined to be disjoint.</li> + <li>Aims to learn a fully disjoint embedding.</li> + </ul> + </li> + <li><strong>Conditional Similarity Networks - learned masks</strong> + <ul> + <li>Learns all the components - conv filters, embedding and the masks.</li> + </ul> + </li> + <li>Refer to the paper for details on hyperparameters.</li> +</ul> + +<h2 id="results">Results</h2> + +<ul> + <li>Visual exploration of the learned subspaces (t-SNE visualisation) shows that the network successfully disentangles different features in the embedded vector space.</li> + <li>The learned masks are very sparse and share dimensions.
This shows that CSNs may learn to use only the required number of dimensions, thereby doing away with the need to pick the right size of the embedding.</li> + <li>Order of performance: + <ul> + <li>CSNs with learned masks &gt; CSNs with fixed masks &gt; Task-specific networks &gt; standard triplet network.</li> + <li>Though CSNs with learned masks require more training data.</li> + </ul> + </li> + <li>CSNs also outperform the Standard Triplet Network when used as off-the-shelf features for a (brand) classification task, and are very close to the performance of a ResNet trained on ImageNet.</li> + <li>This shows that while the CSN retained most of the information in the original network, the training mechanism of the Standard Triplet Network hurts the underlying conv features and their generalising capability.</li> +</ul>
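+<p>A minimal sketch of the masked triplet loss, assuming PyTorch; the embedding network <em>f</em> and the two regularization terms are omitted:</p>
+
+<pre><code>import torch
+import torch.nn.functional as F
+
+def masked_distance(f_x, f_y, mask):
+    """Euclidean distance in the subspace selected by the ReLU'd mask."""
+    m = F.relu(mask)
+    return ((f_x * m - f_y * m) ** 2).sum(dim=-1).sqrt()
+
+def csn_triplet_loss(f_i, f_j, f_l, mask, margin=0.2):
+    """L_T(i, j, l) = max{0, D(i, j) - D(i, l) + h} under a condition mask."""
+    d_ij = masked_distance(f_i, f_j, mask)
+    d_il = masked_distance(f_i, f_l, mask)
+    return F.relu(d_ij - d_il + margin).mean()
+
+# toy usage: 8 triplets of 64-d embeddings, masks for 4 similarity notions
+beta = torch.randn(4, 64, requires_grad=True)    # learnable mask parameters
+f = lambda: torch.randn(8, 64)                   # stand-in for embeddings
+loss = csn_triplet_loss(f(), f(), f(), beta[2])  # condition c = 2
+loss.backward()
+</code></pre>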
+ + + + + Simple Baseline for Visual Question Answering + + 2017-04-28T00:00:00-04:00 + /site/2017/04/28/Simple Baseline for Visual Question Answering + <h3 id="problem-statement">Problem Statement</h3> + +<ul> + <li>VQA Task: Given an image and a free-form, open-ended, natural language question (about the image), produce the answer for the image.</li> + <li>The paper attempts to fine-tune the simple baseline method of Bag-of-Words + Image features (iBOWIMG) to make it competitive against more sophisticated LSTM models.</li> + <li><a href="http://arxiv.org/pdf/1512.02167.pdf">Link to the paper</a></li> +</ul> + +<h3 id="model">Model</h3> + +<ul> + <li>VQA is modelled as a classification task where the system learns to choose among one of the top k most frequent answers.</li> + <li><strong>Text Features</strong> - Convert the input question to a one-hot vector and then transform it into word vectors using a word embedding.</li> + <li><strong>Image Features</strong> - Last layer activations from GoogLeNet.</li> + <li>The text features are concatenated with the image features and fed into a softmax.</li> + <li>Different learning rates and weight clipping are used for the word embedding layer and the softmax layer, with the learning rate for the embedding layer much higher than that of the softmax layer.</li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li>The iBOWIMG model reports an accuracy of 55.89% for Open-ended questions and 61.97% for Multiple-Choice questions, which is comparable to the performance of other, more sophisticated models.</li> +</ul> + +<h3 id="interpretation-of-the-model">Interpretation of the model</h3> + +<ul> + <li>Since the model is very simple, it is possible to interpret the model to know what exactly the model is learning. This is the greatest strength of the paper, even though the model is very simple and naive.</li> + <li>The model attempts to memorise the correlations between the answer class and the informative words (in the question) and image features.</li> + <li>Question words can often influence the answer, given the bias in the images in the COCO dataset.</li> + <li>Given the simple linear transformation being used, it is possible to quantify the importance of each single word (in the question) to the answer.</li> + <li>The paper uses the Class Activation Mapping (CAM) approach (which uses the linear relation between the softmax and the final image feature map) to highlight the informative image regions relevant to the predicted answer.</li> + <li>While the results reported by the paper are not themselves so significant, the described approach provides a way to interpret the strengths and weaknesses of different VQA datasets.</li> +</ul> + + + + + VQA-Visual Question Answering + + 2017-04-27T00:00:00-04:00 + /site/2017/04/27/VQA Visual Question Answering + <h3 id="problem-statement">Problem Statement</h3> + +<ul> + <li> + <p>Given an image and a free-form, open-ended, natural language question (about the image), produce the answer for the image.</p> + </li> + <li> + <p><a href="https://arxiv.org/abs/1505.00468v6">Link to the paper</a></p> + </li> +</ul> + +<h3 id="vqa-challenge-and-workshop"><a href="http://www.visualqa.org/">VQA Challenge and Workshop</a></h3> + +<ul> + <li>The authors organise an annual challenge and workshop to discuss the state-of-the-art methods and best practices in this domain.</li> + <li>Interestingly, the second version is starting on 27th April 2017 (today).</li> +</ul> + +<h3 id="benefits-over-tasks-like-image-captioning">Benefits over tasks like image captioning:</h3> + +<ul> + <li>Simple, <em>n-gram</em> statistics based methods are not sufficient.</li> + <li>Requires the system to blend in different aspects of knowledge - object detection, activity recognition, commonsense reasoning, etc.</li> + <li>Since only short answers are expected, evaluation is easier.</li> +</ul> + +<h3 id="dataset">Dataset</h3> + +<ul> + <li>Created a new dataset of 50000 realistic, abstract images.</li> + <li>Used AMT to crowdsource the task of collecting questions and answers for the MS COCO dataset (&gt;200K images) and the abstract images.</li> + <li>Three questions per image and ten answers per question (along with their confidence) were collected.</li> + <li>The entire dataset contains over 760K questions and 10M answers.</li> + <li>The authors also performed an exhaustive analysis of the dataset to establish its diversity and to explore how the content of these question-answers differs from that of standard image captioning datasets.</li> +</ul> + +<h3 id="highlights-of-data-collection-methodology">Highlights of data collection methodology</h3> + +<ul> + <li>Emphasis on questions that require an image, and not just common sense, to be answered correctly.</li> + <li>Workers were shown previous questions when writing new questions to increase diversity.</li> + <li>Answers were collected from multiple users to account for discrepancies in answers by humans.</li> + <li>Two modalities supported: + <ul> + <li><strong>Open-ended</strong> - produce the answer</li> + <li><strong>multiple-choice</strong> - select from a set of options provided (18 options comprising popular, plausible, random and, of course, the correct answers)</li> + </ul> + </li> +</ul> + +<h3 id="highlights-from-data-analysis">Highlights from data analysis</h3>
+ +<ul> + <li>Most questions range from four to ten words while answers range from one to three words.</li> + <li>Around 40% of the questions are “yes/no” questions.</li> + <li>Significant (&gt;80%) inter-human agreement for answers.</li> + <li>The authors performed a study where human evaluators were asked to answer the questions without looking at the images.</li> + <li>Further, they performed a study where evaluators were asked to label whether a question could be answered using common sense and what was the youngest age group that, they felt, could answer the question.</li> + <li>The idea was to establish that a sufficient number of questions in the dataset required more than just common sense to answer.</li> +</ul> + +<h3 id="baseline-models">Baseline Models</h3> + +<ul> + <li><strong>random</strong> selection</li> + <li><strong>prior (“yes”)</strong> - always answer yes.</li> + <li><strong>per Q-type prior</strong> - pick the most popular answer per question type.</li> + <li><strong>nearest neighbor</strong> - find the k nearest neighbors for the given (image, question) pair.</li> +</ul> + +<h3 id="methods">Methods</h3> + +<ul> + <li> + <p>2-channel model (using vision and language models) followed by a softmax over the (K = 1000) most frequent answers.</p> + </li> + <li><strong>Image Channel</strong> + <ul> + <li><strong>I</strong> - Used the last hidden layer of VGGNet to obtain a 4096-dim image embedding.</li> + <li><strong>norm I</strong> - l2-normalized version of <strong>I</strong>.</li> + </ul> + </li> + <li><strong>Question Channel</strong> + <ul> + <li><strong>BoW Q</strong> - Bag-of-Words representation for the questions using the top 1000 words plus the top 10 first, second and third words of the questions.</li> + <li><strong>LSTM Q</strong> - Each word is encoded into a 300-dim vector using a fully connected layer + tanh non-linearity. These embeddings are fed to an LSTM to obtain a 1024-dim embedding.</li> + <li><strong>Deeper LSTM Q</strong> - Same as LSTM Q but uses two hidden layers to obtain a 2048-dim embedding.</li> + </ul> + </li> + <li><strong>Multi-Layer Perceptron (MLP)</strong> - Combine the image and question embeddings to obtain a single embedding. + <ul> + <li><strong>BoW Q + I</strong> method - concatenate the BoW Q and I embeddings.</li> + <li><strong>LSTM Q + I, deeper LSTM Q + norm I</strong> methods - the image embedding is transformed to 1024-dim using an FC layer and tanh non-linearity, followed by element-wise multiplication of the image and question vectors (see the sketch at the end of this summary).</li> + </ul> + </li> + <li>Pass the combined embedding to an MLP - an FC neural network with 2 hidden layers (1000 neurons each and 0.5 dropout) with tanh, followed by a softmax.</li> + <li>Cross-entropy loss with the VGGNet parameters frozen.</li> +</ul> + +<h3 id="results">Results</h3> + +<ul> + <li>Deeper LSTM Q + norm I is the best model with 58.16% accuracy on the open-ended task and 63.09% on multiple-choice, but far behind the human evaluators (&gt;80% and &gt;90% respectively).</li> + <li>The best model performs well for answers involving common visual objects but performs poorly for answers involving counts.</li> + <li>The vision-only model performs even worse than the model which always produces “yes” as the answer.</li> +</ul>
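+<p>A minimal sketch of the “deeper LSTM Q + norm I”-style fusion, assuming PyTorch; the LSTM question encoder and VGGNet features are replaced by random tensors, and projecting both modalities to 1024-dim is an assumption of this sketch:</p>
+
+<pre><code>import torch
+import torch.nn as nn
+
+class VqaFusion(nn.Module):
+    """Sketch of the fusion model: question and image embeddings are
+    combined by element-wise multiplication, then an MLP predicts a
+    distribution over the K most frequent answers."""
+    def __init__(self, q_dim=2048, img_dim=4096, fused=1024, k_answers=1000):
+        super().__init__()
+        self.q_proj = nn.Sequential(nn.Linear(q_dim, fused), nn.Tanh())
+        self.i_proj = nn.Sequential(nn.Linear(img_dim, fused), nn.Tanh())
+        self.mlp = nn.Sequential(
+            nn.Linear(fused, 1000), nn.Tanh(), nn.Dropout(0.5),
+            nn.Linear(1000, 1000), nn.Tanh(), nn.Dropout(0.5),
+            nn.Linear(1000, k_answers))
+
+    def forward(self, q_emb, img_feat):
+        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)  # "norm I"
+        return self.mlp(self.q_proj(q_emb) * self.i_proj(img_feat))
+
+model = VqaFusion()
+logits = model(torch.randn(4, 2048), torch.randn(4, 4096))  # (4, 1000)
+</code></pre>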
+<h3 id="results">Results</h3>
+
+<ul>
+  <li>Deeper LSTM Q + norm I is the best model, with 58.16% accuracy on the open-ended task and 63.09% on multiple-choice, but it is far behind the human evaluators (&gt;80% and &gt;90% respectively).</li>
+  <li>The best model performs well for answers involving common visual objects but poorly for answers involving counts.</li>
+  <li>The vision-only model performs even worse than the model that always answers “yes”.</li>
+</ul>
+
+
+
+
+
diff --git a/_site/site/index.html b/_site/site/index.html
new file mode 100644
index 00000000..c842c537
--- /dev/null
+++ b/_site/site/index.html
@@ -0,0 +1,13 @@
+
+
+
+ + + diff --git a/_site/site/index.html.1 b/_site/site/index.html.1 new file mode 100755 index 00000000..cac41710 --- /dev/null +++ b/_site/site/index.html.1 @@ -0,0 +1,924 @@ +#!/usr/bin/env bash + +shopt -s extglob +set -o errtrace +set -o errexit + +rvm_install_initialize() +{ + DEFAULT_SOURCES=(github.com/rvm/rvm bitbucket.org/mpapis/rvm) + + BASH_MIN_VERSION="3.2.25" + if + [[ -n "${BASH_VERSION:-}" && + "$(\printf "%b" "${BASH_VERSION:-}\n${BASH_MIN_VERSION}\n" | LC_ALL=C \sort -t"." -k1,1n -k2,2n -k3,3n | \head -n1)" != "${BASH_MIN_VERSION}" + ]] + then + echo "BASH ${BASH_MIN_VERSION} required (you have $BASH_VERSION)" + exit 1 + fi + + export HOME PS4 + export rvm_trace_flag rvm_debug_flag rvm_user_install_flag rvm_ignore_rvmrc rvm_prefix rvm_path + + PS4="+ \${BASH_SOURCE##\${rvm_path:-}} : \${FUNCNAME[0]:+\${FUNCNAME[0]}()} \${LINENO} > " +} + +log() { printf "%b\n" "$*"; } +debug(){ [[ ${rvm_debug_flag:-0} -eq 0 ]] || printf "%b\n" "Running($#): $*"; } +fail() { log "\nERROR: $*\n" ; exit 1 ; } + +rvm_install_commands_setup() +{ + \which which >/dev/null 2>&1 || fail "Could not find 'which' command, make sure it's available first before continuing installation." + \which grep >/dev/null 2>&1 || fail "Could not find 'grep' command, make sure it's available first before continuing installation." + if + [[ -z "${rvm_tar_command:-}" ]] && builtin command -v gtar >/dev/null + then + rvm_tar_command=gtar + elif + ${rvm_tar_command:-tar} --help 2>&1 | GREP_OPTIONS="" \grep -- --strip-components >/dev/null + then + rvm_tar_command="${rvm_tar_command:-tar}" + else + case "$(uname)" in + (OpenBSD) + log "Trying to install GNU version of tar, might require sudo password" + if (( UID )) + then sudo pkg_add -z gtar-1 + else pkg_add -z gtar-1 + fi + rvm_tar_command=gtar + ;; + (Darwin|FreeBSD|DragonFly) # it's not possible to autodetect on OSX, the help/man does not mention all flags + rvm_tar_command=tar + ;; + (SunOS) + case "$(uname -r)" in + (5.10) + log "Trying to install GNU version of tar, might require sudo password" + if (( UID )) + then + if \which sudo >/dev/null 2>&1 + then sudo_10=sudo + elif \which /opt/csw/bin/sudo >/dev/null 2>&1 + then sudo_10=/opt/csw/bin/sudo + else fail "sudo is required but not found. You may install sudo from OpenCSW repository (https://www.opencsw.org/about)" + fi + pkginfo -q CSWpkgutil || $sudo_10 pkgadd -a $rvm_path/config/solaris/noask -d https://get.opencsw.org/now CSWpkgutil + sudo /opt/csw/bin/pkgutil -iy CSWgtar -t https://mirror.opencsw.org/opencsw/unstable + else + pkginfo -q CSWpkgutil || pkgadd -a $rvm_path/config/solaris/noask -d https://get.opencsw.org/now CSWpkgutil + /opt/csw/bin/pkgutil -iy CSWgtar -t https://mirror.opencsw.org/opencsw/unstable + fi + rvm_tar_command=/opt/csw/bin/gtar + ;; + (*) + rvm_tar_command=tar + ;; + esac + esac + builtin command -v ${rvm_tar_command:-gtar} >/dev/null || + fail "Could not find GNU compatible version of 'tar' command, make sure it's available first before continuing installation." + fi + if + [[ " ${rvm_tar_options:-} " != *" --no-same-owner "* ]] && + $rvm_tar_command --help 2>&1 | GREP_OPTIONS="" \grep -- --no-same-owner >/dev/null + then + rvm_tar_options="${rvm_tar_options:-}${rvm_tar_options:+ }--no-same-owner" + fi +} + +usage() +{ + printf "%b" " + +Usage + + rvm-installer [options] [action] + +Options + + [[--]version] + + The version or tag to install. Valid values are: + + latest - The latest tagged version. + latest-minor - The latest minor version of the current major version. 
+ latest- - The latest minor version of version x. + latest-. - The latest patch version of version x.y. + .. - Major version x, minor version y and patch z. + + [--]branch + + The name of the branch from which RVM is installed. This option can be used + with the following formats for : + + / + + If account is wayneeseguin or mpapis, installs from one of the following: + + https://github.com/rvm/rvm/archive/master.tar.gz + https://bitbucket.org/mpapis/rvm/get/master.tar.gz + + Otherwise, installs from: + + https://github.com//rvm/archive/master.tar.gz + + / + + If account is wayneeseguin or mpapis, installs from one of the following: + + https://github.com/rvm/rvm/archive/.tar.gz + https://bitbucket.org/mpapis/rvm/get/.tar.gz + + Otherwise, installs from: + + https://github.com//rvm/archive/.tar.gz + + [/] + + Installs the branch from one of the following: + + https://github.com/rvm/rvm/archive/.tar.gz + https://bitbucket.org/mpapis/rvm/get/.tar.gz + + [--]source + + Defines the repository from which RVM is retrieved and installed in the format: + + // + + Where: + + - Is bitbucket.org, github.com or a github enterprise site serving + an RVM repository. + - Is the user account in which the RVM repository resides. + - Is the name of the RVM repository. + + Note that when using the [--]source option, one should only use the [/]branch format + with the [--]branch option. Failure to do so will result in undefined behavior. + + --trace + + Provides debug logging for the installation script. +Actions + + master - Installs RVM from the master branch at rvm/rvm on github or mpapis/rvm + on bitbucket.org. + stable - Installs RVM from the stable branch a rvm/rvm on github or mpapis/rvm + on bitbucket.org. + help - Displays this output. + +" +} + +## duplication marker 32fosjfjsznkjneuera48jae +__rvm_curl_output_control() +{ + if + (( ${rvm_quiet_curl_flag:-0} == 1 )) + then + __flags+=( "--silent" "--show-error" ) + elif + [[ " $*" == *" -s"* || " $*" == *" --silent"* ]] + then + # make sure --show-error is used with --silent + [[ " $*" == *" -S"* || " $*" == *" -sS"* || " $*" == *" --show-error"* ]] || + { + __flags+=( "--show-error" ) + } + fi +} + +## duplication marker 32fosjfjsznkjneuera48jae +# -S is automatically added to -s +__rvm_curl() +( + __rvm_which curl >/dev/null || + { + rvm_error "RVM requires 'curl'. Install 'curl' first and try again." + return 200 + } + + typeset -a __flags + __flags=( --fail --location --max-redirs 10 ) + + [[ "$*" == *"--max-time"* ]] || + [[ "$*" == *"--connect-timeout"* ]] || + __flags+=( --connect-timeout 30 --retry-delay 2 --retry 3 ) + + if [[ -n "${rvm_proxy:-}" ]] + then __flags+=( --proxy "${rvm_proxy:-}" ) + fi + + __rvm_curl_output_control + + unset curl + __rvm_debug_command \curl "${__flags[@]}" "$@" || return $? +) + +rvm_error() { printf "ERROR: %b\n" "$*"; } +__rvm_which(){ which "$@" || return $?; true; } +__rvm_debug_command() +{ + debug "Running($#): $*" + "$@" || return $? + true +} +rvm_is_a_shell_function() +{ + [[ -t 0 && -t 1 ]] || return $? + return ${rvm_is_not_a_shell_function:-0} +} + +# Searches the tags for the highest available version matching a given pattern. +# fetch_version (github.com/rvm/rvm bitbucket.org/mpapis/rvm) 1.10. -> 1.10.3 +# fetch_version (github.com/rvm/rvm bitbucket.org/mpapis/rvm) 1.10. -> 1.10.3 +# fetch_version (github.com/rvm/rvm bitbucket.org/mpapis/rvm) 1. 
-> 1.11.0 +# fetch_version (github.com/rvm/rvm bitbucket.org/mpapis/rvm) "" -> 2.0.1 +fetch_version() +{ + typeset _account _domain _pattern _repo _sources _values _version + _sources=(${!1}) + _pattern=$2 + for _source in "${_sources[@]}" + do + IFS='/' read -r _domain _account _repo <<< "${_source}" + _version="$( + fetch_versions ${_domain} ${_account} ${_repo} | + GREP_OPTIONS="" \grep "^${_pattern:-}" | tail -n 1 + )" + if + [[ -n ${_version} ]] + then + echo "${_version}" + return 0 + fi + done +} + +# Returns a sorted list of all version tags from a repository +fetch_versions() +{ + typeset _account _domain _repo _url + _domain=$1 + _account=$2 + _repo=$3 + case ${_domain} in + (bitbucket.org) + _url=https://${_domain}/api/1.0/repositories/${_account}/${_repo}/branches-tags + ;; + (github.com) + _url=https://api.${_domain}/repos/${_account}/${_repo}/tags + ;; + + (*) + _url=https://${_domain}/api/v3/repos/${_account}/${_repo}/tags + ;; + esac + __rvm_curl -s ${_url} | + \awk -v RS=',' -v FS='"' '$2=="name"{print $4}' | + sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n -k 5,5n +} + +install_release() +{ + typeset _source _sources _url _version _verify_pgp + _sources=(${!1}) + _version=$2 + debug "Downloading RVM version ${_version}" + for _source in "${_sources[@]}" + do + case ${_source} in + (bitbucket.org*) + _url="https://${_source}/get/${_version}.tar.gz" + _verify_pgp="https://${_source}/downloads/${_version}.tar.gz.asc" + ;; + (*) + _url="https://${_source}/archive/${_version}.tar.gz" + _verify_pgp="https://${_source}/releases/download/${_version}/${_version}.tar.gz.asc" + ;; + esac + get_and_unpack "${_url}" "rvm-${_version}.tgz" "$_verify_pgp" && return + done + return $? +} + +install_head() +{ + typeset _branch _source _sources _url + _sources=(${!1}) + _branch=$2 + debug "Selected RVM branch ${_branch}" + for _source in "${_sources[@]}" + do + case ${_source} in + (bitbucket.org*) + _url=https://${_source}/get/${_branch}.tar.gz + ;; + (*) + _url=https://${_source}/archive/${_branch}.tar.gz + ;; + esac + get_and_unpack "${_url}" "rvm-${_branch//\//_}.tgz" && return + done + return $? +} + +# duplication marker dfkjdjngdfjngjcszncv +# Drop in cd which _doesn't_ respect cdpath +__rvm_cd() +{ + typeset old_cdpath ret + ret=0 + old_cdpath="${CDPATH}" + CDPATH="." + chpwd_functions="" builtin cd "$@" || ret=$? + CDPATH="${old_cdpath}" + return $ret +} + +get_package() +{ + typeset _url _file + _url="$1" + _file="$2" + log "Downloading ${_url}" + __rvm_curl -sS ${_url} > ${rvm_archives_path}/${_file} || + { + _return=$? + case $_return in + # duplication marker lfdgzkngdkjvnfjknkjvcnbjkncvjxbn + (60) + log " +Could not download '${_url}', you can read more about it here: +https://rvm.io/support/fixing-broken-ssl-certificates/ +To continue in insecure mode run 'echo insecure >> ~/.curlrc'. +" + ;; + # duplication marker lfdgzkngdkjvnfjknkjvcnbjkncvjxbn + (77) + log " +It looks like you have old certificates, you can read more about it here: +https://rvm.io/support/fixing-broken-ssl-certificates/ +" + ;; + # duplication marker lfdgzkngdkjvnfjknkjvcnbjkncvjxbn + (141) + log " +Curl returned 141 - it is result of a segfault which means it's Curls fault. +Try again and if it crashes more than a couple of times you either need to +reinstall Curl or consult with your distribution manual and contact support. +" + ;; + (*) + log " +Could not download '${_url}'. + curl returned status '$_return'. 
+" + ;; + esac + return $_return + } +} + +# duplication marker flnglfdjkngjndkfjhsbdjgfghdsgfklgg +rvm_install_gpg_setup() +{ + export rvm_gpg_command + { + rvm_gpg_command="$( \which gpg2 2>/dev/null )" && + [[ ${rvm_gpg_command} != "/cygdrive/"* ]] + } || rvm_gpg_command="" + + debug "Detected GPG program: '$rvm_gpg_command'" + + [[ -n "$rvm_gpg_command" ]] || return $? +} + +# duplication marker rdjgndfnghdfnhgfdhbghdbfhgbfdhbn +verify_package_pgp() +{ + if + "${rvm_gpg_command}" --verify "$2" "$1" + then + log "GPG verified '$1'" + else + typeset _ret=$? + log "\ +Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found. \ +Assuming you trust Michal Papis import the mpapis public key (downloading the signatures). + +GPG signature verification failed for '$1' - '$3'! Try to install GPG v2 and then fetch the public key: + + ${SUDO_USER:+sudo }${rvm_gpg_command##*/} --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 + +or if it fails: + + command curl -sSL https://rvm.io/mpapis.asc | ${SUDO_USER:+sudo }${rvm_gpg_command##*/} --import - + +the key can be compared with: + + https://rvm.io/mpapis.asc + https://keybase.io/mpapis + +NOTE: GPG version 2.1.17 have a bug which cause failures during fetching keys from remote server. Please downgrade \ +or upgrade to newer version (if available) or use the second method described above. +" + exit $_ret + fi +} + +verify_pgp() +{ + [[ -n "${1:-}" ]] || + { + debug "No PGP url given, skipping." + return 0 + } + + get_package "$1" "$2.asc" || + { + debug "PGP url given but does not exist: '$1'" + return 0 + } + + rvm_install_gpg_setup || + { + log "Found PGP signature at: '$1', +but no GPG software exists to validate it, skipping." + return 0 + } + + verify_package_pgp "${rvm_archives_path}/$2" "${rvm_archives_path}/$2.asc" "$1" +} + +get_and_unpack() +{ + typeset _url _file _patern _return _verify_pgp + _url="$1" + _file="$2" + _verify_pgp="$3" + + get_package "$_url" "$_file" || return $? + verify_pgp "$_verify_pgp" "$_file" || return $? + + [[ -d "${rvm_src_path}/rvm" ]] || \mkdir -p "${rvm_src_path}/rvm" + __rvm_cd "${rvm_src_path}/rvm" || + { + _return=$? + log "Could not change directory '${rvm_src_path}/rvm'." + return $_return + } + + rm -rf ${rvm_src_path}/rvm/* + __rvm_debug_command $rvm_tar_command xzf ${rvm_archives_path}/${_file} ${rvm_tar_options:-} --strip-components 1 || + { + _return=$? + log "Could not extract RVM sources." + return $_return + } +} + +rvm_install_default_settings() +{ + # Tracing, if asked for. + if + [[ "$*" == *--trace* ]] || (( ${rvm_trace_flag:-0} > 0 )) + then + set -o xtrace + rvm_trace_flag=1 + fi + + # Variable initialization, remove trailing slashes if they exist on HOME + true \ + ${rvm_trace_flag:=0} ${rvm_debug_flag:=0}\ + ${rvm_ignore_rvmrc:=0} HOME="${HOME%%+(\/)}" + + if + (( rvm_ignore_rvmrc == 0 )) + then + for rvmrc in /etc/rvmrc "$HOME/.rvmrc" + do + if + [[ -s "$rvmrc" ]] + then + if + GREP_OPTIONS="" \grep '^\s*rvm .*$' "$rvmrc" >/dev/null 2>&1 + then + printf "%b" " + Error: $rvmrc is for rvm settings only. + rvm CLI may NOT be called from within $rvmrc. 
+ Skipping the loading of $rvmrc + " + exit 1 + else + source "$rvmrc" + fi + fi + done + fi + + if + [[ -z "${rvm_path:-}" ]] + then + if + (( UID == 0 )) + then + rvm_user_install_flag=0 + rvm_prefix="/usr/local" + rvm_path="${rvm_prefix}/rvm" + else + rvm_user_install_flag=1 + rvm_prefix="$HOME" + rvm_path="${rvm_prefix}/.rvm" + fi + fi + if [[ -z "${rvm_prefix}" ]] + then rvm_prefix=$( dirname $rvm_path ) + fi + + # duplication marker kkdfkgnjfndgjkndfjkgnkfjdgn + [[ -n "${rvm_user_install_flag:-}" ]] || + case "$rvm_path" in + (/usr/local/rvm) rvm_user_install_flag=0 ;; + ($HOME/*|/${USER// /_}*) rvm_user_install_flag=1 ;; + (*) rvm_user_install_flag=0 ;; + esac +} + +rvm_install_parse_params() +{ + install_rubies=() + install_gems=() + flags=( ./scripts/install ) + forwarded_flags=() + while + (( $# > 0 )) + do + token="$1" + shift + case "$token" in + + (--trace) + set -o xtrace + rvm_trace_flag=1 + flags=( -x "${flags[@]}" "$token" ) + forwarded_flags+=( "$token" ) + ;; + + (--debug|--quiet-curl) + flags+=( "$token" ) + forwarded_flags+=( "$token" ) + token=${token#--} + token=${token//-/_} + export "rvm_${token}_flag"=1 + printf "%b" "Turning on ${token/_/ } mode.\n" + ;; + + (--path) + if [[ -n "${1:-}" ]] + then + rvm_path="$1" + shift + else + fail "--path must be followed by a path." + fi + ;; + + (--branch|branch) # Install RVM from a given branch + if [[ -n "${1:-}" ]] + then + case "$1" in + (/*) + branch=${1#/} + ;; + (*/) + branch=master + if [[ "${1%/}" -ne wayneeseguin ]] && [[ "${1%/}" -ne mpapis ]] + then sources=(github.com/${1%/}/rvm) + fi + ;; + (*/*) + branch=${1#*/} + if [[ "${1%%/*}" -ne wayneeseguin ]] && [[ "${1%%/*}" -ne mpapis ]] + then sources=(github.com/${1%%/*}/rvm) + fi + ;; + (*) + branch="$1" + ;; + esac + shift + else + fail "--branch must be followed by a branchname." + fi + ;; + + (--source|source) + if [[ -n "${1:-}" ]] + then + if [[ "$1" = */*/* ]] + then + sources=($1) + shift + else + fail "--source must be in the format //." + fi + else + fail "--source must be followed by a source." + fi + ;; + + (--user-install|--ignore-dotfiles) + token=${token#--} + token=${token//-/_} + export "rvm_${token}_flag"=1 + printf "%b" "Turning on ${token/_/ } mode.\n" + ;; + + (--auto-dotfiles) + flags+=( "$token" ) + export "rvm_auto_dotfiles_flag"=1 + printf "%b" "Turning on auto dotfiles mode.\n" + ;; + + (--auto) + export "rvm_auto_dotfiles_flag"=1 + printf "%b" "Warning, --auto is deprecated in favor of --auto-dotfiles.\n" + ;; + + (--verify-downloads) + if [[ -n "${1:-}" ]] + then + export rvm_verify_downloads_flag="$1" + forwarded_flags+=( "$token" "$1" ) + shift + else + fail "--verify-downloads must be followed by level(0|1|2)." 
+ fi + ;; + + (--autolibs=*) + flags+=( "$token" ) + export rvm_autolibs_flag="${token#--autolibs=}" + forwarded_flags+=( "$token" ) + ;; + + (--without-gems=*|--with-gems=*|--with-default-gems=*) + flags+=( "$token" ) + value="${token#*=}" + token="${token%%=*}" + token="${token#--}" + token="${token//-/_}" + export "rvm_${token}"="${value}" + printf "%b" "Installing RVM ${token/_/ }: ${value}.\n" + ;; + + (--version|version) + version="$1" + shift + ;; + + (head|master) + version="head" + branch="master" + ;; + + (stable) + version="latest" + ;; + + (latest|latest-*|+([[:digit:]]).+([[:digit:]]).+([[:digit:]])) + version="$token" + ;; + + (--ruby) + install_rubies+=( ruby ) + ;; + + (--ruby=*) + token=${token#--ruby=} + install_rubies+=( ${token//,/ } ) + ;; + + (--rails) + install_gems+=( rails ) + ;; + + (--gems=*) + token=${token#--gems=} + install_gems+=( ${token//,/ } ) + ;; + + (--add-to-rvm-group) + export rvm_add_users_to_rvm_group="$1" + shift + ;; + + (help|usage) + usage + exit 0 + ;; + + (*) + usage + exit 1 + ;; + + esac + done + + if (( ${#install_gems[@]} > 0 && ${#install_rubies[@]} == 0 )) + then install_rubies=( ruby ) + fi + + true "${version:=head}" + true "${branch:=master}" + + if [[ -z "${sources[@]}" ]] + then sources=("${DEFAULT_SOURCES[@]}") + fi + + rvm_src_path="$rvm_path/src" + rvm_archives_path="$rvm_path/archives" + rvm_releases_url="https://rvm.io/releases" +} + +rvm_install_validate_rvm_path() +{ + case "$rvm_path" in + (*[[:space:]]*) + printf "%b" " +It looks you are one of the happy *space* users (in home dir name), +RVM is not yet fully ready for it, use this trick to fix it: + + sudo mkdir -p /${USER// /_}.rvm + sudo chown -R \"$USER:\" /${USER// /_}.rvm + echo \"export rvm_path=/${USER// /_}.rvm\" >> \"$HOME/.rvmrc\" + +and start installing again. + +" + exit 2 + ;; + (/usr/share/ruby-rvm) + printf "%b" " +It looks you are one of the happy Ubuntu users, +RVM packaged by Ubuntu is old and broken, +follow this link for details how to fix: + + https://stackoverflow.com/a/9056395/497756 + +" + [[ "${rvm_uses_broken_ubuntu_path:-no}" == "yes" ]] || exit 3 + ;; + esac + + if [[ "$rvm_path" != "/"* ]] + then fail "The rvm install path must be fully qualified. Tried $rvm_path" + fi +} + +rvm_install_validate_volume_mount_mode() +{ + \typeset path partition test_exec + + path=$rvm_path + + # Directory $rvm_path might not exists at this point so we need to traverse the tree upwards + while [[ -n "$path" ]] + do + if [[ -d $path ]] + then + partition=`df -P $path | awk 'END{print $1}'` + + test_exec=$(mktemp $path/rvm-exec-test.XXXXXX) + echo '#!/bin/sh' > "$test_exec" + chmod +x "$test_exec" + + if ! "$test_exec" + then + rm -f "$test_exec" + printf "%b" " +It looks that scripts located in ${path}, which would be RVM destination ${rvm_path}, +are not executable. One of the reasons might be that partition ${partition} holding this location +is mounted in *noexec* mode, which prevents RVM from working correctly. Please verify your setup +and re-mount partition ${partition} without the noexec option." + exit 2 + fi + + rm -f "$test_exec" + break + fi + + path=${path%/*} + done +} + +rvm_install_select_and_get_version() +{ + typeset _version_release + + for dir in "$rvm_src_path" "$rvm_archives_path" + do + [[ -d "$dir" ]] || mkdir -p "$dir" + done + + _version_release="${version}" + case "${version}" in + (head) + _version_release="${branch}" + install_head sources[@] ${branch:-master} || exit $? 
+ ;; + + (latest) + install_release sources[@] $(fetch_version sources[@]) || exit $? + ;; + + (latest-minor) + version="$(\cat "$rvm_path/VERSION")" + install_release sources[@] $(fetch_version sources[@] ${version%.*}) || exit $? + ;; + + (latest-*) + install_release sources[@] $(fetch_version sources[@] ${version#latest-}) || exit $? + ;; + + (+([[:digit:]]).+([[:digit:]]).+([[:digit:]])) # x.y.z + install_release sources[@] ${version} || exit $? + ;; + + (*) + fail "Something went wrong, unrecognized version '$version'" + ;; + esac + echo "${_version_release}" > "$rvm_path/RELEASE" +} + +rvm_install_main() +{ + [[ -f ./scripts/install ]] || + { + log "'./scripts/install' can not be found for installation, something went wrong, it usally means your 'tar' is broken, please report it here: https://github.com/rvm/rvm/issues" + return 127 + } + + # required flag - path to install + flags+=( --path "$rvm_path" ) + \command bash "${flags[@]}" +} + +rvm_install_ruby_and_gems() +( + if + (( ${#install_rubies[@]} > 0 )) + then + source ${rvm_scripts_path:-${rvm_path}/scripts}/rvm + source ${rvm_scripts_path:-${rvm_path}/scripts}/version + __rvm_version + + for _ruby in ${install_rubies[@]} + do command rvm "${forwarded_flags[@]}" install ${_ruby} -j 2 + done + # set the first one as default, skip rest + for _ruby in ${install_rubies[@]} + do + rvm "${forwarded_flags[@]}" alias create default ${_ruby} + break + done + + for _gem in ${install_gems[@]} + do rvm "${forwarded_flags[@]}" all do gem install ${_gem} + done + + printf "%b" " + * To start using RVM you need to run \`source $rvm_path/scripts/rvm\` + in all your open shell windows, in rare cases you need to reopen all shell windows. +" + + if + [[ "${install_gems[*]}" == *"rails"* ]] + then + printf "%b" " + * To start using rails you need to run \`rails new \`. +" + fi + fi +) + +rvm_install() +{ + rvm_install_initialize + rvm_install_commands_setup + rvm_install_default_settings + rvm_install_parse_params "$@" + rvm_install_validate_rvm_path + rvm_install_validate_volume_mount_mode + rvm_install_select_and_get_version + rvm_install_main + rvm_install_ruby_and_gems +} + +rvm_install "$@" diff --git a/_site/site/public/apple-touch-icon-precomposed.png b/_site/site/public/apple-touch-icon-precomposed.png new file mode 100755 index 0000000000000000000000000000000000000000..6cb41a8e552c6150fe56b3e7dec516fb20093793 GIT binary patch literal 831 zcmeAS@N?(olHy`uVBq!ia0vp^GeDSw2}n*~u)PdONtU=qlmzFem6RtIr7}3C&;z9@5zNS2R@25`Asfh;AKnX+Na7u<97pKfq@h3WR^mA`88 z?UHTy&$mDSZ(}bLt#tR^R^N;HyMq~m6j*~49J!V_aH)7Sih$4rCd~;tYEVtKwv^Ph03*mqU=WF?`)+l_csf7$lbVq?eg8ZCriHZf0dPmchuUsuf3nV1IIRN=fxMh6GLk=k zZMu23Znx5OYaSz`-ky294+9(jy#Dy{%!v)95ooLJGzi`&*fMKaA}@awOyN0x{Ma$D%Ixgb7d1FHF+EN3U!RoRfdo7Gf15=v@yE0Bi`VYm+j?lDea(>@8y@Wa|LNPOtgI}ipL>obxb$mz>>h)i?~)gFJvr>RGNdW-kATMnuF%es z>gk&wbfKpkRv}<|f(1S}JwXE>EPm?sYzAfquV*D64qq{42j&O{Pgg&ebxsLQ0K4yS AE&u=k literal 0 HcmV?d00001 diff --git a/_site/site/public/css/lanyon.css b/_site/site/public/css/lanyon.css new file mode 100755 index 00000000..1d57108e --- /dev/null +++ b/_site/site/public/css/lanyon.css @@ -0,0 +1,563 @@ +/* + * ___ + * /\_ \ + * \//\ \ __ ___ __ __ ___ ___ + * \ \ \ /'__`\ /' _ `\/\ \/\ \ / __`\ /' _ `\ + * \_\ \_/\ \_\.\_/\ \/\ \ \ \_\ \/\ \_\ \/\ \/\ \ + * /\____\ \__/.\_\ \_\ \_\/`____ \ \____/\ \_\ \_\ + * \/____/\/__/\/_/\/_/\/_/`/___/> \/___/ \/_/\/_/ + * /\___/ + * \/__/ + * + * Designed, built, and released under MIT license by @mdo. Learn more at + * https://github.com/poole/lanyon. 
+ */ + + +/* + * Contents + * + * Global resets + * Masthead + * Sidebar + * Slide effect + * Posts and pages + * Pagination + * Reverse layout + * Themes + */ + + +/* + * Global resets + * + * Update the foundational and global aspects of the page. + */ + +/* Prevent scroll on narrow devices */ +html, +body { + overflow-x: hidden; +} + +html { + font-family: "PT Serif", Georgia, "Times New Roman", serif; +} + +h1, h2, h3, h4, h5, h6 { + font-family: "PT Sans", Helvetica, Arial, sans-serif; + font-weight: 400; + color: #313131; + letter-spacing: -.025rem; +} + + +/* + * Wrapper + * + * The wrapper is used to position site content when the sidebar is toggled. We + * use an outter wrap to position the sidebar without interferring with the + * regular page content. + */ + +.wrap { + position: relative; + width: 100%; +} + + +/* + * Container + * + * Center the page content. + */ + +.container { + max-width: 28rem; +} +@media (min-width: 38em) { + .container { + max-width: 32rem; + } +} +@media (min-width: 56em) { + .container { + max-width: 38rem; + } +} + + +/* + * Masthead + * + * Super small header above the content for site name and short description. + */ + +.masthead { + padding-top: 1rem; + padding-bottom: 1rem; + margin-bottom: 3rem; + border-bottom: 1px solid #eee; +} +.masthead-title { + margin-top: 0; + margin-bottom: 0; + color: #505050; +} +.masthead-title a { + color: #505050; +} +.masthead-title small { + font-size: 75%; + font-weight: 400; + color: #c0c0c0; + letter-spacing: 0; +} + +@media (max-width: 48em) { + .masthead-title { + text-align: center; + } + .masthead-title small { + display: none; + } +} + + +/* + * Sidebar + * + * The sidebar is the drawer, the item we are toggling with our handy hamburger + * button in the corner of the page. + * + * This particular sidebar implementation was inspired by Chris Coyier's + * "Offcanvas Menu with CSS Target" article, and the checkbox variation from the + * comments by a reader. It modifies both implementations to continue using the + * checkbox (no change in URL means no polluted browser history), but this uses + * `position` for the menu to avoid some potential content reflow issues. 
+ * + * Source: http://css-tricks.com/off-canvas-menu-with-css-target/#comment-207504 + */ + +/* Style and "hide" the sidebar */ +.sidebar { + position: fixed; + top: 0; + bottom: 0; + left: -14rem; + width: 14rem; + visibility: hidden; + overflow-y: auto; + font-family: "PT Sans", Helvetica, Arial, sans-serif; + font-size: .875rem; /* 15px */ + color: rgba(255,255,255,.6); + background-color: #202020; + -webkit-transition: all .3s ease-in-out; + transition: all .3s ease-in-out; +} +@media (min-width: 30em) { + .sidebar { + font-size: .75rem; /* 14px */ + } +} + +/* Sidebar content */ +.sidebar a { + font-weight: normal; + color: #fff; +} +.sidebar-item { + padding: 1rem; +} +.sidebar-item p:last-child { + margin-bottom: 0; +} + +/* Sidebar nav */ +.sidebar-nav { + border-bottom: 1px solid rgba(255,255,255,.1); +} +.sidebar-nav-item { + display: block; + padding: .5rem 1rem; + border-top: 1px solid rgba(255,255,255,.1); +} +.sidebar-nav-item.active, +a.sidebar-nav-item:hover, +a.sidebar-nav-item:focus { + text-decoration: none; + background-color: rgba(255,255,255,.1); + border-color: transparent; +} + +@media (min-width: 48em) { + .sidebar-item { + padding: 1.5rem; + } + .sidebar-nav-item { + padding-left: 1.5rem; + padding-right: 1.5rem; + } +} + +/* Hide the sidebar checkbox that we toggle with `.sidebar-toggle` */ +.sidebar-checkbox { + position: absolute; + opacity: 0; + -webkit-user-select: none; + -moz-user-select: none; + user-select: none; +} + +/* Style the `label` that we use to target the `.sidebar-checkbox` */ +.sidebar-toggle { + position: absolute; + top: .8rem; + left: 1rem; + display: block; + padding: .25rem .75rem; + color: #505050; + background-color: #fff; + border-radius: .25rem; + cursor: pointer; +} + +.sidebar-toggle:before { + display: inline-block; + width: 1rem; + height: .75rem; + content: ""; + background-image: -webkit-linear-gradient(to bottom, #555, #555 20%, #fff 20%, #fff 40%, #555 40%, #555 60%, #fff 60%, #fff 80%, #555 80%, #555 100%); + background-image: -moz-linear-gradient(to bottom, #555, #555 20%, #fff 20%, #fff 40%, #555 40%, #555 60%, #fff 60%, #fff 80%, #555 80%, #555 100%); + background-image: -ms-linear-gradient(to bottom, #555, #555 20%, #fff 20%, #fff 40%, #555 40%, #555 60%, #fff 60%, #fff 80%, #555 80%, #555 100%); + background-image: linear-gradient(to bottom, #555, #555 20%, #fff 20%, #fff 40%, #555 40%, #555 60%, #fff 60%, #fff 80%, #555 80%, #555 100%); +} + +.sidebar-toggle:active, +#sidebar-checkbox:focus ~ .sidebar-toggle, +#sidebar-checkbox:checked ~ .sidebar-toggle { + color: #fff; + background-color: #555; +} + +.sidebar-toggle:active:before, +#sidebar-checkbox:focus ~ .sidebar-toggle:before, +#sidebar-checkbox:checked ~ .sidebar-toggle:before { + background-image: -webkit-linear-gradient(to bottom, #fff, #fff 20%, #555 20%, #555 40%, #fff 40%, #fff 60%, #555 60%, #555 80%, #fff 80%, #fff 100%); + background-image: -moz-linear-gradient(to bottom, #fff, #fff 20%, #555 20%, #555 40%, #fff 40%, #fff 60%, #555 60%, #555 80%, #fff 80%, #fff 100%); + background-image: -ms-linear-gradient(to bottom, #fff, #fff 20%, #555 20%, #555 40%, #fff 40%, #fff 60%, #555 60%, #555 80%, #fff 80%, #fff 100%); + background-image: linear-gradient(to bottom, #fff, #fff 20%, #555 20%, #555 40%, #fff 40%, #fff 60%, #555 60%, #555 80%, #fff 80%, #fff 100%); +} + +@media (min-width: 30.1em) { + .sidebar-toggle { + position: fixed; + } +} + +@media print { + .sidebar-toggle { + display: none; + } +} + +/* Slide effect + * + * Handle the sliding effects of 
the sidebar and content in one spot, seperate + * from the default styles. + * + * As an a heads up, we don't use `transform: translate3d()` here because when + * mixed with `position: fixed;` for the sidebar toggle, it creates a new + * containing block. Put simply, the fixed sidebar toggle behaves like + * `position: absolute;` when transformed. + * + * Read more about it at http://meyerweb.com/eric/thoughts/2011/09/12/. + */ + +.wrap, +.sidebar, +.sidebar-toggle { + -webkit-backface-visibility: hidden; + -ms-backface-visibility: hidden; + backface-visibility: hidden; +} +.wrap, +.sidebar-toggle { + -webkit-transition: -webkit-transform .3s ease-in-out; + transition: transform .3s ease-in-out; +} + +#sidebar-checkbox:checked + .sidebar { + z-index: 10; + visibility: visible; +} +#sidebar-checkbox:checked ~ .sidebar, +#sidebar-checkbox:checked ~ .wrap, +#sidebar-checkbox:checked ~ .sidebar-toggle { + -webkit-transform: translateX(14rem); + -ms-transform: translateX(14rem); + transform: translateX(14rem); +} + + +/* + * Posts and pages + * + * Each post is wrapped in `.post` and is used on default and post layouts. Each + * page is wrapped in `.page` and is only used on the page layout. + */ + +.page, +.post { + margin-bottom: 4em; +} + +/* Blog post or page title */ +.page-title, +.post-title, +.post-title a { + color: #303030; +} +.page-title, +.post-title { + margin-top: 0; +} + +/* Meta data line below post title */ +.post-date { + display: block; + margin-top: -.5rem; + margin-bottom: 1rem; + color: #9a9a9a; +} + +/* Related posts */ +.related { + padding-top: 2rem; + padding-bottom: 2rem; + border-top: 1px solid #eee; +} +.related-posts { + padding-left: 0; + list-style: none; +} +.related-posts h3 { + margin-top: 0; +} +.related-posts li small { + font-size: 75%; + color: #999; +} +.related-posts li a:hover { + color: #268bd2; + text-decoration: none; +} +.related-posts li a:hover small { + color: inherit; +} + + +/* + * Pagination + * + * Super lightweight (HTML-wise) blog pagination. `span`s are provide for when + * there are no more previous or next posts to show. + */ + +.pagination { + overflow: hidden; /* clearfix */ + margin-left: -1rem; + margin-right: -1rem; + font-family: "PT Sans", Helvetica, Arial, sans-serif; + color: #ccc; + text-align: center; +} + +/* Pagination items can be `span`s or `a`s */ +.pagination-item { + display: block; + padding: 1rem; + border: 1px solid #eee; +} +.pagination-item:first-child { + margin-bottom: -1px; +} + +/* Only provide a hover state for linked pagination items */ +a.pagination-item:hover { + background-color: #f5f5f5; +} + +@media (min-width: 30em) { + .pagination { + margin: 3rem 0; + } + .pagination-item { + float: left; + width: 50%; + } + .pagination-item:first-child { + margin-bottom: 0; + border-top-left-radius: 4px; + border-bottom-left-radius: 4px; + } + .pagination-item:last-child { + margin-left: -1px; + border-top-right-radius: 4px; + border-bottom-right-radius: 4px; + } +} + + +/* + * Reverse layout + * + * Flip the orientation of the page by placing the `.sidebar` and sidebar toggle + * on the right side. 
+ */ + +.layout-reverse .sidebar { + left: auto; + right: -14rem; +} +.layout-reverse .sidebar-toggle { + left: auto; + right: 1rem; +} + +.layout-reverse #sidebar-checkbox:checked ~ .sidebar, +.layout-reverse #sidebar-checkbox:checked ~ .wrap, +.layout-reverse #sidebar-checkbox:checked ~ .sidebar-toggle { + -webkit-transform: translateX(-14rem); + -ms-transform: translateX(-14rem); + transform: translateX(-14rem); +} + + +/* + * Themes + * + * Apply custom color schemes by adding the appropriate class to the `body`. + * Based on colors from Base16: http://chriskempson.github.io/base16/#default. + */ + +/* Red */ +.theme-base-08 .sidebar, +.theme-base-08 .sidebar-toggle:active, +.theme-base-08 #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #ac4142; +} +.theme-base-08 .container a, +.theme-base-08 .sidebar-toggle, +.theme-base-08 .related-posts li a:hover { + color: #ac4142; +} + +/* Orange */ +.theme-base-09 .sidebar, +.theme-base-09 .sidebar-toggle:active, +.theme-base-09 #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #d28445; +} +.theme-base-09 .container a, +.theme-base-09 .sidebar-toggle, +.theme-base-09 .related-posts li a:hover { + color: #d28445; +} + +/* Yellow */ +.theme-base-0a .sidebar, +.theme-base-0a .sidebar-toggle:active, +.theme-base-0a #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #f4bf75; +} +.theme-base-0a .container a, +.theme-base-0a .sidebar-toggle, +.theme-base-0a .related-posts li a:hover { + color: #f4bf75; +} + +/* Green */ +.theme-base-0b .sidebar, +.theme-base-0b .sidebar-toggle:active, +.theme-base-0b #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #90a959; +} +.theme-base-0b .container a, +.theme-base-0b .sidebar-toggle, +.theme-base-0b .related-posts li a:hover { + color: #90a959; +} + +/* Cyan */ +.theme-base-0c .sidebar, +.theme-base-0c .sidebar-toggle:active, +.theme-base-0c #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #75b5aa; +} +.theme-base-0c .container a, +.theme-base-0c .sidebar-toggle, +.theme-base-0c .related-posts li a:hover { + color: #75b5aa; +} + +/* Blue */ +.theme-base-0d .sidebar, +.theme-base-0d .sidebar-toggle:active, +.theme-base-0d #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #6a9fb5; +} +.theme-base-0d .container a, +.theme-base-0d .sidebar-toggle, +.theme-base-0d .related-posts li a:hover { + color: #6a9fb5; +} + +/* Magenta */ +.theme-base-0e .sidebar, +.theme-base-0e .sidebar-toggle:active, +.theme-base-0e #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #aa759f; +} +.theme-base-0e .container a, +.theme-base-0e .sidebar-toggle, +.theme-base-0e .related-posts li a:hover { + color: #aa759f; +} + +/* Brown */ +.theme-base-0f .sidebar, +.theme-base-0f .sidebar-toggle:active, +.theme-base-0f #sidebar-checkbox:checked ~ .sidebar-toggle { + background-color: #8f5536; +} +.theme-base-0f .container a, +.theme-base-0f .sidebar-toggle, +.theme-base-0f .related-posts li a:hover { + color: #8f5536; +} + + +/* + * Overlay sidebar + * + * Make the sidebar content overlay the viewport content instead of pushing it + * aside when toggled. 
+ */ + +.sidebar-overlay #sidebar-checkbox:checked ~ .wrap { + -webkit-transform: translateX(0); + -ms-transform: translateX(0); + transform: translateX(0); +} +.sidebar-overlay #sidebar-checkbox:checked ~ .sidebar-toggle { + box-shadow: 0 0 0 .25rem #fff; +} +.sidebar-overlay #sidebar-checkbox:checked ~ .sidebar { + box-shadow: .25rem 0 .5rem rgba(0,0,0,.1); +} + +/* Only one tweak for a reverse layout */ +.layout-reverse.sidebar-overlay #sidebar-checkbox:checked ~ .sidebar { + box-shadow: -.25rem 0 .5rem rgba(0,0,0,.1); +} diff --git a/_site/site/public/css/poole.css b/_site/site/public/css/poole.css new file mode 100755 index 00000000..8ec27e7a --- /dev/null +++ b/_site/site/public/css/poole.css @@ -0,0 +1,430 @@ +/* + * ___ + * /\_ \ + * _____ ___ ___\//\ \ __ + * /\ '__`\ / __`\ / __`\\ \ \ /'__`\ + * \ \ \_\ \/\ \_\ \/\ \_\ \\_\ \_/\ __/ + * \ \ ,__/\ \____/\ \____//\____\ \____\ + * \ \ \/ \/___/ \/___/ \/____/\/____/ + * \ \_\ + * \/_/ + * + * Designed, built, and released under MIT license by @mdo. Learn more at + * https://github.com/poole/poole. + */ + + +/* + * Contents + * + * Body resets + * Custom type + * Messages + * Container + * Masthead + * Posts and pages + * Pagination + * Reverse layout + * Themes + */ + + +/* + * Body resets + * + * Update the foundational and global aspects of the page. + */ + +* { + -webkit-box-sizing: border-box; + -moz-box-sizing: border-box; + box-sizing: border-box; +} + +html, +body { + margin: 0; + padding: 0; +} + +html { + font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; + font-size: 16px; + line-height: 1.5; +} +@media (min-width: 38em) { + html { + font-size: 20px; + } +} + +body { + color: #515151; + background-color: #fff; + -webkit-text-size-adjust: 100%; + -ms-text-size-adjust: 100%; +} + +/* No `:visited` state is required by default (browsers will use `a`) */ +a { + color: #268bd2; + text-decoration: none; +} +a strong { + color: inherit; +} +/* `:focus` is linked to `:hover` for basic accessibility */ +a:hover, +a:focus { + text-decoration: underline; +} + +/* Headings */ +h1, h2, h3, h4, h5, h6 { + margin-bottom: .5rem; + font-weight: bold; + line-height: 1.25; + color: #313131; + text-rendering: optimizeLegibility; +} +h1 { + font-size: 2rem; +} +h2 { + margin-top: 1rem; + font-size: 1.5rem; +} +h3 { + margin-top: 1.5rem; + font-size: 1.25rem; +} +h4, h5, h6 { + margin-top: 1rem; + font-size: 1rem; +} + +/* Body text */ +p { + margin-top: 0; + margin-bottom: 1rem; +} + +strong { + color: #303030; +} + + +/* Lists */ +ul, ol, dl { + margin-top: 0; + margin-bottom: 1rem; +} + +dt { + font-weight: bold; +} +dd { + margin-bottom: .5rem; +} + +/* Misc */ +hr { + position: relative; + margin: 1.5rem 0; + border: 0; + border-top: 1px solid #eee; + border-bottom: 1px solid #fff; +} + +abbr { + font-size: 85%; + font-weight: bold; + color: #555; + text-transform: uppercase; +} +abbr[title] { + cursor: help; + border-bottom: 1px dotted #e5e5e5; +} + +/* Code */ +code, +pre { + font-family: Menlo, Monaco, "Courier New", monospace; +} +code { + padding: .25em .5em; + font-size: 85%; + color: #bf616a; + background-color: #f9f9f9; + border-radius: 3px; +} +pre { + display: block; + margin-top: 0; + margin-bottom: 1rem; + padding: 1rem; + font-size: .8rem; + line-height: 1.4; + white-space: pre; + white-space: pre-wrap; + word-break: break-all; + word-wrap: break-word; + background-color: #f9f9f9; +} +pre code { + padding: 0; + font-size: 100%; + color: inherit; + background-color: transparent; +} + +/* Pygments via Jekyll */ 
+.highlight { + margin-bottom: 1rem; + border-radius: 4px; +} +.highlight pre { + margin-bottom: 0; +} + +/* Gist via GitHub Pages */ +.gist .gist-file { + font-family: Menlo, Monaco, "Courier New", monospace !important; +} +.gist .markdown-body { + padding: 15px; +} +.gist pre { + padding: 0; + background-color: transparent; +} +.gist .gist-file .gist-data { + font-size: .8rem !important; + line-height: 1.4; +} +.gist code { + padding: 0; + color: inherit; + background-color: transparent; + border-radius: 0; +} + +/* Quotes */ +blockquote { + padding: .5rem 1rem; + margin: .8rem 0; + color: #7a7a7a; + border-left: .25rem solid #e5e5e5; +} +blockquote p:last-child { + margin-bottom: 0; +} +@media (min-width: 30em) { + blockquote { + padding-right: 5rem; + padding-left: 1.25rem; + } +} + +img { + display: block; + max-width: 100%; + margin: 0 0 1rem; + border-radius: 5px; +} + +/* Tables */ +table { + margin-bottom: 1rem; + width: 100%; + border: 1px solid #e5e5e5; + border-collapse: collapse; +} +td, +th { + padding: .25rem .5rem; + border: 1px solid #e5e5e5; +} +tbody tr:nth-child(odd) td, +tbody tr:nth-child(odd) th { + background-color: #f9f9f9; +} + + +/* + * Custom type + * + * Extend paragraphs with `.lead` for larger introductory text. + */ + +.lead { + font-size: 1.25rem; + font-weight: 300; +} + + +/* + * Messages + * + * Show alert messages to users. You may add it to single elements like a `
<blockquote>
`, + * or to a parent if there are multiple elements to show. + */ + +.message { + margin-bottom: 1rem; + padding: 1rem; + color: #717171; + background-color: #f9f9f9; +} + + +/* + * Container + * + * Center the page content. + */ + +.container { + max-width: 38rem; + padding-left: 1rem; + padding-right: 1rem; + margin-left: auto; + margin-right: auto; +} + + +/* + * Masthead + * + * Super small header above the content for site name and short description. + */ + +.masthead { + padding-top: 1rem; + padding-bottom: 1rem; + margin-bottom: 3rem; +} +.masthead-title { + margin-top: 0; + margin-bottom: 0; + color: #505050; +} +.masthead-title a { + color: #505050; +} +.masthead-title small { + font-size: 75%; + font-weight: 400; + color: #c0c0c0; + letter-spacing: 0; +} + + +/* + * Posts and pages + * + * Each post is wrapped in `.post` and is used on default and post layouts. Each + * page is wrapped in `.page` and is only used on the page layout. + */ + +.page, +.post { + margin-bottom: 4em; +} + +/* Blog post or page title */ +.page-title, +.post-title, +.post-title a { + color: #303030; +} +.page-title, +.post-title { + margin-top: 0; +} + +/* Meta data line below post title */ +.post-date { + display: block; + margin-top: -.5rem; + margin-bottom: 1rem; + color: #9a9a9a; +} + +/* Related posts */ +.related { + padding-top: 2rem; + padding-bottom: 2rem; + border-top: 1px solid #eee; +} +.related-posts { + padding-left: 0; + list-style: none; +} +.related-posts h3 { + margin-top: 0; +} +.related-posts li small { + font-size: 75%; + color: #999; +} +.related-posts li a:hover { + color: #268bd2; + text-decoration: none; +} +.related-posts li a:hover small { + color: inherit; +} + + +/* + * Pagination + * + * Super lightweight (HTML-wise) blog pagination. `span`s are provide for when + * there are no more previous or next posts to show. 
+ */ + +.pagination { + overflow: hidden; /* clearfix */ + margin-left: -1rem; + margin-right: -1rem; + font-family: "PT Sans", Helvetica, Arial, sans-serif; + color: #ccc; + text-align: center; +} + +/* Pagination items can be `span`s or `a`s */ +.pagination-item { + display: block; + padding: 1rem; + border: 1px solid #eee; +} +.pagination-item:first-child { + margin-bottom: -1px; +} + +/* Only provide a hover state for linked pagination items */ +a.pagination-item:hover { + background-color: #f5f5f5; +} + +@media (min-width: 30em) { + .pagination { + margin: 3rem 0; + } + .pagination-item { + float: left; + width: 50%; + } + .pagination-item:first-child { + margin-bottom: 0; + border-top-left-radius: 4px; + border-bottom-left-radius: 4px; + } + .pagination-item:last-child { + margin-left: -1px; + border-top-right-radius: 4px; + border-bottom-right-radius: 4px; + } +} diff --git a/_site/site/public/css/style.css b/_site/site/public/css/style.css new file mode 100755 index 00000000..8013c531 --- /dev/null +++ b/_site/site/public/css/style.css @@ -0,0 +1,58 @@ +.tag-box { + list-style: none; + margin: 0; + padding: 4px 0; + overflow: hidden; + *zoom: 1; +} + +.tag-box:before, .tag-box:after { + display: table; + content: ""; + line-height: 0; +} + +.tag-box:after { + clear: both; +} + +.tag-box.inline li { + float: left; + font-size: 14px; + font-size: 0.875rem; + line-height: 2.5; +} + +.tag-box a { + padding: 4px 6px; + margin: 2px; + background-color: #e6e6e6; + -webkit-border-radius: 4px; + -moz-border-radius: 4px; + border-radius: 4px; + text-decoration: none; +} + +.tag-box a span { + vertical-align: super; + font-size: 10px; + font-size: 0.625rem; +} + +.sidebar .social-icons a { + color: rgba(255, 255, 255, 0.6); + padding-right: 0.75em; +} + +.sidebar .social-icons a:hover { + text-decoration: none; +} + + .page .social-icons { + text-align: center; +} + +.page .social-icons a { + color: #515151; + padding: 10px; +} \ No newline at end of file diff --git a/_site/site/public/css/syntax.css b/_site/site/public/css/syntax.css new file mode 100755 index 00000000..15ad7977 --- /dev/null +++ b/_site/site/public/css/syntax.css @@ -0,0 +1,65 @@ +.highlight .hll { background-color: #ffc; } +.highlight .c { color: #999; } /* Comment */ +.highlight .err { color: #a00; background-color: #faa } /* Error */ +.highlight .k { color: #069; } /* Keyword */ +.highlight .o { color: #555 } /* Operator */ +.highlight .cm { color: #09f; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #099 } /* Comment.Preproc */ +.highlight .c1 { color: #999; } /* Comment.Single */ +.highlight .cs { color: #999; } /* Comment.Special */ +.highlight .gd { background-color: #fcc; border: 1px solid #c00 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .gr { color: #f00 } /* Generic.Error */ +.highlight .gh { color: #030; } /* Generic.Heading */ +.highlight .gi { background-color: #cfc; border: 1px solid #0c0 } /* Generic.Inserted */ +.highlight .go { color: #aaa } /* Generic.Output */ +.highlight .gp { color: #009; } /* Generic.Prompt */ +.highlight .gs { } /* Generic.Strong */ +.highlight .gu { color: #030; } /* Generic.Subheading */ +.highlight .gt { color: #9c6 } /* Generic.Traceback */ +.highlight .kc { color: #069; } /* Keyword.Constant */ +.highlight .kd { color: #069; } /* Keyword.Declaration */ +.highlight .kn { color: #069; } /* Keyword.Namespace */ +.highlight .kp { color: #069 } /* Keyword.Pseudo */ +.highlight .kr { color: #069; } /* 
Keyword.Reserved */ +.highlight .kt { color: #078; } /* Keyword.Type */ +.highlight .m { color: #f60 } /* Literal.Number */ +.highlight .s { color: #d44950 } /* Literal.String */ +.highlight .na { color: #4f9fcf } /* Name.Attribute */ +.highlight .nb { color: #366 } /* Name.Builtin */ +.highlight .nc { color: #0a8; } /* Name.Class */ +.highlight .no { color: #360 } /* Name.Constant */ +.highlight .nd { color: #99f } /* Name.Decorator */ +.highlight .ni { color: #999; } /* Name.Entity */ +.highlight .ne { color: #c00; } /* Name.Exception */ +.highlight .nf { color: #c0f } /* Name.Function */ +.highlight .nl { color: #99f } /* Name.Label */ +.highlight .nn { color: #0cf; } /* Name.Namespace */ +.highlight .nt { color: #2f6f9f; } /* Name.Tag */ +.highlight .nv { color: #033 } /* Name.Variable */ +.highlight .ow { color: #000; } /* Operator.Word */ +.highlight .w { color: #bbb } /* Text.Whitespace */ +.highlight .mf { color: #f60 } /* Literal.Number.Float */ +.highlight .mh { color: #f60 } /* Literal.Number.Hex */ +.highlight .mi { color: #f60 } /* Literal.Number.Integer */ +.highlight .mo { color: #f60 } /* Literal.Number.Oct */ +.highlight .sb { color: #c30 } /* Literal.String.Backtick */ +.highlight .sc { color: #c30 } /* Literal.String.Char */ +.highlight .sd { color: #c30; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #c30 } /* Literal.String.Double */ +.highlight .se { color: #c30; } /* Literal.String.Escape */ +.highlight .sh { color: #c30 } /* Literal.String.Heredoc */ +.highlight .si { color: #a00 } /* Literal.String.Interpol */ +.highlight .sx { color: #c30 } /* Literal.String.Other */ +.highlight .sr { color: #3aa } /* Literal.String.Regex */ +.highlight .s1 { color: #c30 } /* Literal.String.Single */ +.highlight .ss { color: #fc3 } /* Literal.String.Symbol */ +.highlight .bp { color: #366 } /* Name.Builtin.Pseudo */ +.highlight .vc { color: #033 } /* Name.Variable.Class */ +.highlight .vg { color: #033 } /* Name.Variable.Global */ +.highlight .vi { color: #033 } /* Name.Variable.Instance */ +.highlight .il { color: #f60 } /* Literal.Number.Integer.Long */ + +.css .o, +.css .o + .nt, +.css .nt + .nt { color: #999; } diff --git a/_site/site/public/favicon.ico b/_site/site/public/favicon.ico new file mode 100755 index 0000000000000000000000000000000000000000..9aa5f1942104dee63977c19cad295ca13db2783d GIT binary patch literal 1150 zcmb`HO(;ZB6vwZ!vzb!XmbNHFO3B8?UMb1M&VD9F(vYwtnVF59vU!Ru)GX|TQWiFn ztc1Mpc^}^WJNG@N>D?#Oj9%y7cg{V(ckcP_zbuh-dNXSH%$W9w$ zbk^P$NvPAJX1bjwL?;$ah=?uo*&gUy6EI72FblKL*2ZL?h?97;kVc@t0-rvD_h$s! 
z3%~<-cjwTS2jkz7`C=OOQI33vs=mhFG`MTQDVM6Y?|VypVm@?N|Na90!(~IfwmK5# z;oY9q&5QckN*?aD0&abPI~I(c$-nt*FWf5yR5Z%jhW~g&c^9E?jK^7_OZl4z3&aAI zlTE0LeMwBMP2%n2^^jMW`fL4udzW})cZzr(#iuzE{m^m)unsd2b&T~#;>CI8bQ|1r ysBS8Pw li { + position: relative; +} +.fa-li { + position: absolute; + left: -2.14285714em; + width: 2.14285714em; + top: 0.14285714em; + text-align: center; +} +.fa-li.fa-lg { + left: -1.85714286em; +} +.fa-border { + padding: .2em .25em .15em; + border: solid 0.08em #eeeeee; + border-radius: .1em; +} +.fa-pull-left { + float: left; +} +.fa-pull-right { + float: right; +} +.fa.fa-pull-left { + margin-right: .3em; +} +.fa.fa-pull-right { + margin-left: .3em; +} +/* Deprecated as of 4.4.0 */ +.pull-right { + float: right; +} +.pull-left { + float: left; +} +.fa.pull-left { + margin-right: .3em; +} +.fa.pull-right { + margin-left: .3em; +} +.fa-spin { + -webkit-animation: fa-spin 2s infinite linear; + animation: fa-spin 2s infinite linear; +} +.fa-pulse { + -webkit-animation: fa-spin 1s infinite steps(8); + animation: fa-spin 1s infinite steps(8); +} +@-webkit-keyframes fa-spin { + 0% { + -webkit-transform: rotate(0deg); + transform: rotate(0deg); + } + 100% { + -webkit-transform: rotate(359deg); + transform: rotate(359deg); + } +} +@keyframes fa-spin { + 0% { + -webkit-transform: rotate(0deg); + transform: rotate(0deg); + } + 100% { + -webkit-transform: rotate(359deg); + transform: rotate(359deg); + } +} +.fa-rotate-90 { + -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=1)"; + -webkit-transform: rotate(90deg); + -ms-transform: rotate(90deg); + transform: rotate(90deg); +} +.fa-rotate-180 { + -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2)"; + -webkit-transform: rotate(180deg); + -ms-transform: rotate(180deg); + transform: rotate(180deg); +} +.fa-rotate-270 { + -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=3)"; + -webkit-transform: rotate(270deg); + -ms-transform: rotate(270deg); + transform: rotate(270deg); +} +.fa-flip-horizontal { + -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)"; + -webkit-transform: scale(-1, 1); + -ms-transform: scale(-1, 1); + transform: scale(-1, 1); +} +.fa-flip-vertical { + -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)"; + -webkit-transform: scale(1, -1); + -ms-transform: scale(1, -1); + transform: scale(1, -1); +} +:root .fa-rotate-90, +:root .fa-rotate-180, +:root .fa-rotate-270, +:root .fa-flip-horizontal, +:root .fa-flip-vertical { + filter: none; +} +.fa-stack { + position: relative; + display: inline-block; + width: 2em; + height: 2em; + line-height: 2em; + vertical-align: middle; +} +.fa-stack-1x, +.fa-stack-2x { + position: absolute; + left: 0; + width: 100%; + text-align: center; +} +.fa-stack-1x { + line-height: inherit; +} +.fa-stack-2x { + font-size: 2em; +} +.fa-inverse { + color: #ffffff; +} +/* Font Awesome uses the Unicode Private Use Area (PUA) to ensure screen + readers do not read off random characters that represent icons */ +.fa-glass:before { + content: "\f000"; +} +.fa-music:before { + content: "\f001"; +} +.fa-search:before { + content: "\f002"; +} +.fa-envelope-o:before { + content: "\f003"; +} +.fa-heart:before { + content: "\f004"; +} +.fa-star:before { + content: "\f005"; +} +.fa-star-o:before { + content: "\f006"; +} +.fa-user:before { + content: "\f007"; +} +.fa-film:before { + content: "\f008"; +} +.fa-th-large:before { + content: "\f009"; +} +.fa-th:before { + content: "\f00a"; +} +.fa-th-list:before { + 
content: "\f00b"; +} +.fa-check:before { + content: "\f00c"; +} +.fa-remove:before, +.fa-close:before, +.fa-times:before { + content: "\f00d"; +} +.fa-search-plus:before { + content: "\f00e"; +} +.fa-search-minus:before { + content: "\f010"; +} +.fa-power-off:before { + content: "\f011"; +} +.fa-signal:before { + content: "\f012"; +} +.fa-gear:before, +.fa-cog:before { + content: "\f013"; +} +.fa-trash-o:before { + content: "\f014"; +} +.fa-home:before { + content: "\f015"; +} +.fa-file-o:before { + content: "\f016"; +} +.fa-clock-o:before { + content: "\f017"; +} +.fa-road:before { + content: "\f018"; +} +.fa-download:before { + content: "\f019"; +} +.fa-arrow-circle-o-down:before { + content: "\f01a"; +} +.fa-arrow-circle-o-up:before { + content: "\f01b"; +} +.fa-inbox:before { + content: "\f01c"; +} +.fa-play-circle-o:before { + content: "\f01d"; +} +.fa-rotate-right:before, +.fa-repeat:before { + content: "\f01e"; +} +.fa-refresh:before { + content: "\f021"; +} +.fa-list-alt:before { + content: "\f022"; +} +.fa-lock:before { + content: "\f023"; +} +.fa-flag:before { + content: "\f024"; +} +.fa-headphones:before { + content: "\f025"; +} +.fa-volume-off:before { + content: "\f026"; +} +.fa-volume-down:before { + content: "\f027"; +} +.fa-volume-up:before { + content: "\f028"; +} +.fa-qrcode:before { + content: "\f029"; +} +.fa-barcode:before { + content: "\f02a"; +} +.fa-tag:before { + content: "\f02b"; +} +.fa-tags:before { + content: "\f02c"; +} +.fa-book:before { + content: "\f02d"; +} +.fa-bookmark:before { + content: "\f02e"; +} +.fa-print:before { + content: "\f02f"; +} +.fa-camera:before { + content: "\f030"; +} +.fa-font:before { + content: "\f031"; +} +.fa-bold:before { + content: "\f032"; +} +.fa-italic:before { + content: "\f033"; +} +.fa-text-height:before { + content: "\f034"; +} +.fa-text-width:before { + content: "\f035"; +} +.fa-align-left:before { + content: "\f036"; +} +.fa-align-center:before { + content: "\f037"; +} +.fa-align-right:before { + content: "\f038"; +} +.fa-align-justify:before { + content: "\f039"; +} +.fa-list:before { + content: "\f03a"; +} +.fa-dedent:before, +.fa-outdent:before { + content: "\f03b"; +} +.fa-indent:before { + content: "\f03c"; +} +.fa-video-camera:before { + content: "\f03d"; +} +.fa-photo:before, +.fa-image:before, +.fa-picture-o:before { + content: "\f03e"; +} +.fa-pencil:before { + content: "\f040"; +} +.fa-map-marker:before { + content: "\f041"; +} +.fa-adjust:before { + content: "\f042"; +} +.fa-tint:before { + content: "\f043"; +} +.fa-edit:before, +.fa-pencil-square-o:before { + content: "\f044"; +} +.fa-share-square-o:before { + content: "\f045"; +} +.fa-check-square-o:before { + content: "\f046"; +} +.fa-arrows:before { + content: "\f047"; +} +.fa-step-backward:before { + content: "\f048"; +} +.fa-fast-backward:before { + content: "\f049"; +} +.fa-backward:before { + content: "\f04a"; +} +.fa-play:before { + content: "\f04b"; +} +.fa-pause:before { + content: "\f04c"; +} +.fa-stop:before { + content: "\f04d"; +} +.fa-forward:before { + content: "\f04e"; +} +.fa-fast-forward:before { + content: "\f050"; +} +.fa-step-forward:before { + content: "\f051"; +} +.fa-eject:before { + content: "\f052"; +} +.fa-chevron-left:before { + content: "\f053"; +} +.fa-chevron-right:before { + content: "\f054"; +} +.fa-plus-circle:before { + content: "\f055"; +} +.fa-minus-circle:before { + content: "\f056"; +} +.fa-times-circle:before { + content: "\f057"; +} +.fa-check-circle:before { + content: "\f058"; +} +.fa-question-circle:before { + 
content: "\f059"; +} +.fa-info-circle:before { + content: "\f05a"; +} +.fa-crosshairs:before { + content: "\f05b"; +} +.fa-times-circle-o:before { + content: "\f05c"; +} +.fa-check-circle-o:before { + content: "\f05d"; +} +.fa-ban:before { + content: "\f05e"; +} +.fa-arrow-left:before { + content: "\f060"; +} +.fa-arrow-right:before { + content: "\f061"; +} +.fa-arrow-up:before { + content: "\f062"; +} +.fa-arrow-down:before { + content: "\f063"; +} +.fa-mail-forward:before, +.fa-share:before { + content: "\f064"; +} +.fa-expand:before { + content: "\f065"; +} +.fa-compress:before { + content: "\f066"; +} +.fa-plus:before { + content: "\f067"; +} +.fa-minus:before { + content: "\f068"; +} +.fa-asterisk:before { + content: "\f069"; +} +.fa-exclamation-circle:before { + content: "\f06a"; +} +.fa-gift:before { + content: "\f06b"; +} +.fa-leaf:before { + content: "\f06c"; +} +.fa-fire:before { + content: "\f06d"; +} +.fa-eye:before { + content: "\f06e"; +} +.fa-eye-slash:before { + content: "\f070"; +} +.fa-warning:before, +.fa-exclamation-triangle:before { + content: "\f071"; +} +.fa-plane:before { + content: "\f072"; +} +.fa-calendar:before { + content: "\f073"; +} +.fa-random:before { + content: "\f074"; +} +.fa-comment:before { + content: "\f075"; +} +.fa-magnet:before { + content: "\f076"; +} +.fa-chevron-up:before { + content: "\f077"; +} +.fa-chevron-down:before { + content: "\f078"; +} +.fa-retweet:before { + content: "\f079"; +} +.fa-shopping-cart:before { + content: "\f07a"; +} +.fa-folder:before { + content: "\f07b"; +} +.fa-folder-open:before { + content: "\f07c"; +} +.fa-arrows-v:before { + content: "\f07d"; +} +.fa-arrows-h:before { + content: "\f07e"; +} +.fa-bar-chart-o:before, +.fa-bar-chart:before { + content: "\f080"; +} +.fa-twitter-square:before { + content: "\f081"; +} +.fa-facebook-square:before { + content: "\f082"; +} +.fa-camera-retro:before { + content: "\f083"; +} +.fa-key:before { + content: "\f084"; +} +.fa-gears:before, +.fa-cogs:before { + content: "\f085"; +} +.fa-comments:before { + content: "\f086"; +} +.fa-thumbs-o-up:before { + content: "\f087"; +} +.fa-thumbs-o-down:before { + content: "\f088"; +} +.fa-star-half:before { + content: "\f089"; +} +.fa-heart-o:before { + content: "\f08a"; +} +.fa-sign-out:before { + content: "\f08b"; +} +.fa-linkedin-square:before { + content: "\f08c"; +} +.fa-thumb-tack:before { + content: "\f08d"; +} +.fa-external-link:before { + content: "\f08e"; +} +.fa-sign-in:before { + content: "\f090"; +} +.fa-trophy:before { + content: "\f091"; +} +.fa-github-square:before { + content: "\f092"; +} +.fa-upload:before { + content: "\f093"; +} +.fa-lemon-o:before { + content: "\f094"; +} +.fa-phone:before { + content: "\f095"; +} +.fa-square-o:before { + content: "\f096"; +} +.fa-bookmark-o:before { + content: "\f097"; +} +.fa-phone-square:before { + content: "\f098"; +} +.fa-twitter:before { + content: "\f099"; +} +.fa-facebook-f:before, +.fa-facebook:before { + content: "\f09a"; +} +.fa-github:before { + content: "\f09b"; +} +.fa-unlock:before { + content: "\f09c"; +} +.fa-credit-card:before { + content: "\f09d"; +} +.fa-feed:before, +.fa-rss:before { + content: "\f09e"; +} +.fa-hdd-o:before { + content: "\f0a0"; +} +.fa-bullhorn:before { + content: "\f0a1"; +} +.fa-bell:before { + content: "\f0f3"; +} +.fa-certificate:before { + content: "\f0a3"; +} +.fa-hand-o-right:before { + content: "\f0a4"; +} +.fa-hand-o-left:before { + content: "\f0a5"; +} +.fa-hand-o-up:before { + content: "\f0a6"; +} +.fa-hand-o-down:before { + content: 
"\f0a7"; +} +.fa-arrow-circle-left:before { + content: "\f0a8"; +} +.fa-arrow-circle-right:before { + content: "\f0a9"; +} +.fa-arrow-circle-up:before { + content: "\f0aa"; +} +.fa-arrow-circle-down:before { + content: "\f0ab"; +} +.fa-globe:before { + content: "\f0ac"; +} +.fa-wrench:before { + content: "\f0ad"; +} +.fa-tasks:before { + content: "\f0ae"; +} +.fa-filter:before { + content: "\f0b0"; +} +.fa-briefcase:before { + content: "\f0b1"; +} +.fa-arrows-alt:before { + content: "\f0b2"; +} +.fa-group:before, +.fa-users:before { + content: "\f0c0"; +} +.fa-chain:before, +.fa-link:before { + content: "\f0c1"; +} +.fa-cloud:before { + content: "\f0c2"; +} +.fa-flask:before { + content: "\f0c3"; +} +.fa-cut:before, +.fa-scissors:before { + content: "\f0c4"; +} +.fa-copy:before, +.fa-files-o:before { + content: "\f0c5"; +} +.fa-paperclip:before { + content: "\f0c6"; +} +.fa-save:before, +.fa-floppy-o:before { + content: "\f0c7"; +} +.fa-square:before { + content: "\f0c8"; +} +.fa-navicon:before, +.fa-reorder:before, +.fa-bars:before { + content: "\f0c9"; +} +.fa-list-ul:before { + content: "\f0ca"; +} +.fa-list-ol:before { + content: "\f0cb"; +} +.fa-strikethrough:before { + content: "\f0cc"; +} +.fa-underline:before { + content: "\f0cd"; +} +.fa-table:before { + content: "\f0ce"; +} +.fa-magic:before { + content: "\f0d0"; +} +.fa-truck:before { + content: "\f0d1"; +} +.fa-pinterest:before { + content: "\f0d2"; +} +.fa-pinterest-square:before { + content: "\f0d3"; +} +.fa-google-plus-square:before { + content: "\f0d4"; +} +.fa-google-plus:before { + content: "\f0d5"; +} +.fa-money:before { + content: "\f0d6"; +} +.fa-caret-down:before { + content: "\f0d7"; +} +.fa-caret-up:before { + content: "\f0d8"; +} +.fa-caret-left:before { + content: "\f0d9"; +} +.fa-caret-right:before { + content: "\f0da"; +} +.fa-columns:before { + content: "\f0db"; +} +.fa-unsorted:before, +.fa-sort:before { + content: "\f0dc"; +} +.fa-sort-down:before, +.fa-sort-desc:before { + content: "\f0dd"; +} +.fa-sort-up:before, +.fa-sort-asc:before { + content: "\f0de"; +} +.fa-envelope:before { + content: "\f0e0"; +} +.fa-linkedin:before { + content: "\f0e1"; +} +.fa-rotate-left:before, +.fa-undo:before { + content: "\f0e2"; +} +.fa-legal:before, +.fa-gavel:before { + content: "\f0e3"; +} +.fa-dashboard:before, +.fa-tachometer:before { + content: "\f0e4"; +} +.fa-comment-o:before { + content: "\f0e5"; +} +.fa-comments-o:before { + content: "\f0e6"; +} +.fa-flash:before, +.fa-bolt:before { + content: "\f0e7"; +} +.fa-sitemap:before { + content: "\f0e8"; +} +.fa-umbrella:before { + content: "\f0e9"; +} +.fa-paste:before, +.fa-clipboard:before { + content: "\f0ea"; +} +.fa-lightbulb-o:before { + content: "\f0eb"; +} +.fa-exchange:before { + content: "\f0ec"; +} +.fa-cloud-download:before { + content: "\f0ed"; +} +.fa-cloud-upload:before { + content: "\f0ee"; +} +.fa-user-md:before { + content: "\f0f0"; +} +.fa-stethoscope:before { + content: "\f0f1"; +} +.fa-suitcase:before { + content: "\f0f2"; +} +.fa-bell-o:before { + content: "\f0a2"; +} +.fa-coffee:before { + content: "\f0f4"; +} +.fa-cutlery:before { + content: "\f0f5"; +} +.fa-file-text-o:before { + content: "\f0f6"; +} +.fa-building-o:before { + content: "\f0f7"; +} +.fa-hospital-o:before { + content: "\f0f8"; +} +.fa-ambulance:before { + content: "\f0f9"; +} +.fa-medkit:before { + content: "\f0fa"; +} +.fa-fighter-jet:before { + content: "\f0fb"; +} +.fa-beer:before { + content: "\f0fc"; +} +.fa-h-square:before { + content: "\f0fd"; +} +.fa-plus-square:before { + 
content: "\f0fe"; +} +.fa-angle-double-left:before { + content: "\f100"; +} +.fa-angle-double-right:before { + content: "\f101"; +} +.fa-angle-double-up:before { + content: "\f102"; +} +.fa-angle-double-down:before { + content: "\f103"; +} +.fa-angle-left:before { + content: "\f104"; +} +.fa-angle-right:before { + content: "\f105"; +} +.fa-angle-up:before { + content: "\f106"; +} +.fa-angle-down:before { + content: "\f107"; +} +.fa-desktop:before { + content: "\f108"; +} +.fa-laptop:before { + content: "\f109"; +} +.fa-tablet:before { + content: "\f10a"; +} +.fa-mobile-phone:before, +.fa-mobile:before { + content: "\f10b"; +} +.fa-circle-o:before { + content: "\f10c"; +} +.fa-quote-left:before { + content: "\f10d"; +} +.fa-quote-right:before { + content: "\f10e"; +} +.fa-spinner:before { + content: "\f110"; +} +.fa-circle:before { + content: "\f111"; +} +.fa-mail-reply:before, +.fa-reply:before { + content: "\f112"; +} +.fa-github-alt:before { + content: "\f113"; +} +.fa-folder-o:before { + content: "\f114"; +} +.fa-folder-open-o:before { + content: "\f115"; +} +.fa-smile-o:before { + content: "\f118"; +} +.fa-frown-o:before { + content: "\f119"; +} +.fa-meh-o:before { + content: "\f11a"; +} +.fa-gamepad:before { + content: "\f11b"; +} +.fa-keyboard-o:before { + content: "\f11c"; +} +.fa-flag-o:before { + content: "\f11d"; +} +.fa-flag-checkered:before { + content: "\f11e"; +} +.fa-terminal:before { + content: "\f120"; +} +.fa-code:before { + content: "\f121"; +} +.fa-mail-reply-all:before, +.fa-reply-all:before { + content: "\f122"; +} +.fa-star-half-empty:before, +.fa-star-half-full:before, +.fa-star-half-o:before { + content: "\f123"; +} +.fa-location-arrow:before { + content: "\f124"; +} +.fa-crop:before { + content: "\f125"; +} +.fa-code-fork:before { + content: "\f126"; +} +.fa-unlink:before, +.fa-chain-broken:before { + content: "\f127"; +} +.fa-question:before { + content: "\f128"; +} +.fa-info:before { + content: "\f129"; +} +.fa-exclamation:before { + content: "\f12a"; +} +.fa-superscript:before { + content: "\f12b"; +} +.fa-subscript:before { + content: "\f12c"; +} +.fa-eraser:before { + content: "\f12d"; +} +.fa-puzzle-piece:before { + content: "\f12e"; +} +.fa-microphone:before { + content: "\f130"; +} +.fa-microphone-slash:before { + content: "\f131"; +} +.fa-shield:before { + content: "\f132"; +} +.fa-calendar-o:before { + content: "\f133"; +} +.fa-fire-extinguisher:before { + content: "\f134"; +} +.fa-rocket:before { + content: "\f135"; +} +.fa-maxcdn:before { + content: "\f136"; +} +.fa-chevron-circle-left:before { + content: "\f137"; +} +.fa-chevron-circle-right:before { + content: "\f138"; +} +.fa-chevron-circle-up:before { + content: "\f139"; +} +.fa-chevron-circle-down:before { + content: "\f13a"; +} +.fa-html5:before { + content: "\f13b"; +} +.fa-css3:before { + content: "\f13c"; +} +.fa-anchor:before { + content: "\f13d"; +} +.fa-unlock-alt:before { + content: "\f13e"; +} +.fa-bullseye:before { + content: "\f140"; +} +.fa-ellipsis-h:before { + content: "\f141"; +} +.fa-ellipsis-v:before { + content: "\f142"; +} +.fa-rss-square:before { + content: "\f143"; +} +.fa-play-circle:before { + content: "\f144"; +} +.fa-ticket:before { + content: "\f145"; +} +.fa-minus-square:before { + content: "\f146"; +} +.fa-minus-square-o:before { + content: "\f147"; +} +.fa-level-up:before { + content: "\f148"; +} +.fa-level-down:before { + content: "\f149"; +} +.fa-check-square:before { + content: "\f14a"; +} +.fa-pencil-square:before { + content: "\f14b"; +} 
+.fa-external-link-square:before { + content: "\f14c"; +} +.fa-share-square:before { + content: "\f14d"; +} +.fa-compass:before { + content: "\f14e"; +} +.fa-toggle-down:before, +.fa-caret-square-o-down:before { + content: "\f150"; +} +.fa-toggle-up:before, +.fa-caret-square-o-up:before { + content: "\f151"; +} +.fa-toggle-right:before, +.fa-caret-square-o-right:before { + content: "\f152"; +} +.fa-euro:before, +.fa-eur:before { + content: "\f153"; +} +.fa-gbp:before { + content: "\f154"; +} +.fa-dollar:before, +.fa-usd:before { + content: "\f155"; +} +.fa-rupee:before, +.fa-inr:before { + content: "\f156"; +} +.fa-cny:before, +.fa-rmb:before, +.fa-yen:before, +.fa-jpy:before { + content: "\f157"; +} +.fa-ruble:before, +.fa-rouble:before, +.fa-rub:before { + content: "\f158"; +} +.fa-won:before, +.fa-krw:before { + content: "\f159"; +} +.fa-bitcoin:before, +.fa-btc:before { + content: "\f15a"; +} +.fa-file:before { + content: "\f15b"; +} +.fa-file-text:before { + content: "\f15c"; +} +.fa-sort-alpha-asc:before { + content: "\f15d"; +} +.fa-sort-alpha-desc:before { + content: "\f15e"; +} +.fa-sort-amount-asc:before { + content: "\f160"; +} +.fa-sort-amount-desc:before { + content: "\f161"; +} +.fa-sort-numeric-asc:before { + content: "\f162"; +} +.fa-sort-numeric-desc:before { + content: "\f163"; +} +.fa-thumbs-up:before { + content: "\f164"; +} +.fa-thumbs-down:before { + content: "\f165"; +} +.fa-youtube-square:before { + content: "\f166"; +} +.fa-youtube:before { + content: "\f167"; +} +.fa-xing:before { + content: "\f168"; +} +.fa-xing-square:before { + content: "\f169"; +} +.fa-youtube-play:before { + content: "\f16a"; +} +.fa-dropbox:before { + content: "\f16b"; +} +.fa-stack-overflow:before { + content: "\f16c"; +} +.fa-instagram:before { + content: "\f16d"; +} +.fa-flickr:before { + content: "\f16e"; +} +.fa-adn:before { + content: "\f170"; +} +.fa-bitbucket:before { + content: "\f171"; +} +.fa-bitbucket-square:before { + content: "\f172"; +} +.fa-tumblr:before { + content: "\f173"; +} +.fa-tumblr-square:before { + content: "\f174"; +} +.fa-long-arrow-down:before { + content: "\f175"; +} +.fa-long-arrow-up:before { + content: "\f176"; +} +.fa-long-arrow-left:before { + content: "\f177"; +} +.fa-long-arrow-right:before { + content: "\f178"; +} +.fa-apple:before { + content: "\f179"; +} +.fa-windows:before { + content: "\f17a"; +} +.fa-android:before { + content: "\f17b"; +} +.fa-linux:before { + content: "\f17c"; +} +.fa-dribbble:before { + content: "\f17d"; +} +.fa-skype:before { + content: "\f17e"; +} +.fa-foursquare:before { + content: "\f180"; +} +.fa-trello:before { + content: "\f181"; +} +.fa-female:before { + content: "\f182"; +} +.fa-male:before { + content: "\f183"; +} +.fa-gittip:before, +.fa-gratipay:before { + content: "\f184"; +} +.fa-sun-o:before { + content: "\f185"; +} +.fa-moon-o:before { + content: "\f186"; +} +.fa-archive:before { + content: "\f187"; +} +.fa-bug:before { + content: "\f188"; +} +.fa-vk:before { + content: "\f189"; +} +.fa-weibo:before { + content: "\f18a"; +} +.fa-renren:before { + content: "\f18b"; +} +.fa-pagelines:before { + content: "\f18c"; +} +.fa-stack-exchange:before { + content: "\f18d"; +} +.fa-arrow-circle-o-right:before { + content: "\f18e"; +} +.fa-arrow-circle-o-left:before { + content: "\f190"; +} +.fa-toggle-left:before, +.fa-caret-square-o-left:before { + content: "\f191"; +} +.fa-dot-circle-o:before { + content: "\f192"; +} +.fa-wheelchair:before { + content: "\f193"; +} +.fa-vimeo-square:before { + content: "\f194"; +} 
+.fa-turkish-lira:before, +.fa-try:before { + content: "\f195"; +} +.fa-plus-square-o:before { + content: "\f196"; +} +.fa-space-shuttle:before { + content: "\f197"; +} +.fa-slack:before { + content: "\f198"; +} +.fa-envelope-square:before { + content: "\f199"; +} +.fa-wordpress:before { + content: "\f19a"; +} +.fa-openid:before { + content: "\f19b"; +} +.fa-institution:before, +.fa-bank:before, +.fa-university:before { + content: "\f19c"; +} +.fa-mortar-board:before, +.fa-graduation-cap:before { + content: "\f19d"; +} +.fa-yahoo:before { + content: "\f19e"; +} +.fa-google:before { + content: "\f1a0"; +} +.fa-reddit:before { + content: "\f1a1"; +} +.fa-reddit-square:before { + content: "\f1a2"; +} +.fa-stumbleupon-circle:before { + content: "\f1a3"; +} +.fa-stumbleupon:before { + content: "\f1a4"; +} +.fa-delicious:before { + content: "\f1a5"; +} +.fa-digg:before { + content: "\f1a6"; +} +.fa-pied-piper-pp:before { + content: "\f1a7"; +} +.fa-pied-piper-alt:before { + content: "\f1a8"; +} +.fa-drupal:before { + content: "\f1a9"; +} +.fa-joomla:before { + content: "\f1aa"; +} +.fa-language:before { + content: "\f1ab"; +} +.fa-fax:before { + content: "\f1ac"; +} +.fa-building:before { + content: "\f1ad"; +} +.fa-child:before { + content: "\f1ae"; +} +.fa-paw:before { + content: "\f1b0"; +} +.fa-spoon:before { + content: "\f1b1"; +} +.fa-cube:before { + content: "\f1b2"; +} +.fa-cubes:before { + content: "\f1b3"; +} +.fa-behance:before { + content: "\f1b4"; +} +.fa-behance-square:before { + content: "\f1b5"; +} +.fa-steam:before { + content: "\f1b6"; +} +.fa-steam-square:before { + content: "\f1b7"; +} +.fa-recycle:before { + content: "\f1b8"; +} +.fa-automobile:before, +.fa-car:before { + content: "\f1b9"; +} +.fa-cab:before, +.fa-taxi:before { + content: "\f1ba"; +} +.fa-tree:before { + content: "\f1bb"; +} +.fa-spotify:before { + content: "\f1bc"; +} +.fa-deviantart:before { + content: "\f1bd"; +} +.fa-soundcloud:before { + content: "\f1be"; +} +.fa-database:before { + content: "\f1c0"; +} +.fa-file-pdf-o:before { + content: "\f1c1"; +} +.fa-file-word-o:before { + content: "\f1c2"; +} +.fa-file-excel-o:before { + content: "\f1c3"; +} +.fa-file-powerpoint-o:before { + content: "\f1c4"; +} +.fa-file-photo-o:before, +.fa-file-picture-o:before, +.fa-file-image-o:before { + content: "\f1c5"; +} +.fa-file-zip-o:before, +.fa-file-archive-o:before { + content: "\f1c6"; +} +.fa-file-sound-o:before, +.fa-file-audio-o:before { + content: "\f1c7"; +} +.fa-file-movie-o:before, +.fa-file-video-o:before { + content: "\f1c8"; +} +.fa-file-code-o:before { + content: "\f1c9"; +} +.fa-vine:before { + content: "\f1ca"; +} +.fa-codepen:before { + content: "\f1cb"; +} +.fa-jsfiddle:before { + content: "\f1cc"; +} +.fa-life-bouy:before, +.fa-life-buoy:before, +.fa-life-saver:before, +.fa-support:before, +.fa-life-ring:before { + content: "\f1cd"; +} +.fa-circle-o-notch:before { + content: "\f1ce"; +} +.fa-ra:before, +.fa-resistance:before, +.fa-rebel:before { + content: "\f1d0"; +} +.fa-ge:before, +.fa-empire:before { + content: "\f1d1"; +} +.fa-git-square:before { + content: "\f1d2"; +} +.fa-git:before { + content: "\f1d3"; +} +.fa-y-combinator-square:before, +.fa-yc-square:before, +.fa-hacker-news:before { + content: "\f1d4"; +} +.fa-tencent-weibo:before { + content: "\f1d5"; +} +.fa-qq:before { + content: "\f1d6"; +} +.fa-wechat:before, +.fa-weixin:before { + content: "\f1d7"; +} +.fa-send:before, +.fa-paper-plane:before { + content: "\f1d8"; +} +.fa-send-o:before, +.fa-paper-plane-o:before { + content: 
"\f1d9"; +} +.fa-history:before { + content: "\f1da"; +} +.fa-circle-thin:before { + content: "\f1db"; +} +.fa-header:before { + content: "\f1dc"; +} +.fa-paragraph:before { + content: "\f1dd"; +} +.fa-sliders:before { + content: "\f1de"; +} +.fa-share-alt:before { + content: "\f1e0"; +} +.fa-share-alt-square:before { + content: "\f1e1"; +} +.fa-bomb:before { + content: "\f1e2"; +} +.fa-soccer-ball-o:before, +.fa-futbol-o:before { + content: "\f1e3"; +} +.fa-tty:before { + content: "\f1e4"; +} +.fa-binoculars:before { + content: "\f1e5"; +} +.fa-plug:before { + content: "\f1e6"; +} +.fa-slideshare:before { + content: "\f1e7"; +} +.fa-twitch:before { + content: "\f1e8"; +} +.fa-yelp:before { + content: "\f1e9"; +} +.fa-newspaper-o:before { + content: "\f1ea"; +} +.fa-wifi:before { + content: "\f1eb"; +} +.fa-calculator:before { + content: "\f1ec"; +} +.fa-paypal:before { + content: "\f1ed"; +} +.fa-google-wallet:before { + content: "\f1ee"; +} +.fa-cc-visa:before { + content: "\f1f0"; +} +.fa-cc-mastercard:before { + content: "\f1f1"; +} +.fa-cc-discover:before { + content: "\f1f2"; +} +.fa-cc-amex:before { + content: "\f1f3"; +} +.fa-cc-paypal:before { + content: "\f1f4"; +} +.fa-cc-stripe:before { + content: "\f1f5"; +} +.fa-bell-slash:before { + content: "\f1f6"; +} +.fa-bell-slash-o:before { + content: "\f1f7"; +} +.fa-trash:before { + content: "\f1f8"; +} +.fa-copyright:before { + content: "\f1f9"; +} +.fa-at:before { + content: "\f1fa"; +} +.fa-eyedropper:before { + content: "\f1fb"; +} +.fa-paint-brush:before { + content: "\f1fc"; +} +.fa-birthday-cake:before { + content: "\f1fd"; +} +.fa-area-chart:before { + content: "\f1fe"; +} +.fa-pie-chart:before { + content: "\f200"; +} +.fa-line-chart:before { + content: "\f201"; +} +.fa-lastfm:before { + content: "\f202"; +} +.fa-lastfm-square:before { + content: "\f203"; +} +.fa-toggle-off:before { + content: "\f204"; +} +.fa-toggle-on:before { + content: "\f205"; +} +.fa-bicycle:before { + content: "\f206"; +} +.fa-bus:before { + content: "\f207"; +} +.fa-ioxhost:before { + content: "\f208"; +} +.fa-angellist:before { + content: "\f209"; +} +.fa-cc:before { + content: "\f20a"; +} +.fa-shekel:before, +.fa-sheqel:before, +.fa-ils:before { + content: "\f20b"; +} +.fa-meanpath:before { + content: "\f20c"; +} +.fa-buysellads:before { + content: "\f20d"; +} +.fa-connectdevelop:before { + content: "\f20e"; +} +.fa-dashcube:before { + content: "\f210"; +} +.fa-forumbee:before { + content: "\f211"; +} +.fa-leanpub:before { + content: "\f212"; +} +.fa-sellsy:before { + content: "\f213"; +} +.fa-shirtsinbulk:before { + content: "\f214"; +} +.fa-simplybuilt:before { + content: "\f215"; +} +.fa-skyatlas:before { + content: "\f216"; +} +.fa-cart-plus:before { + content: "\f217"; +} +.fa-cart-arrow-down:before { + content: "\f218"; +} +.fa-diamond:before { + content: "\f219"; +} +.fa-ship:before { + content: "\f21a"; +} +.fa-user-secret:before { + content: "\f21b"; +} +.fa-motorcycle:before { + content: "\f21c"; +} +.fa-street-view:before { + content: "\f21d"; +} +.fa-heartbeat:before { + content: "\f21e"; +} +.fa-venus:before { + content: "\f221"; +} +.fa-mars:before { + content: "\f222"; +} +.fa-mercury:before { + content: "\f223"; +} +.fa-intersex:before, +.fa-transgender:before { + content: "\f224"; +} +.fa-transgender-alt:before { + content: "\f225"; +} +.fa-venus-double:before { + content: "\f226"; +} +.fa-mars-double:before { + content: "\f227"; +} +.fa-venus-mars:before { + content: "\f228"; +} +.fa-mars-stroke:before { + content: "\f229"; +} 
+.fa-mars-stroke-v:before { + content: "\f22a"; +} +.fa-mars-stroke-h:before { + content: "\f22b"; +} +.fa-neuter:before { + content: "\f22c"; +} +.fa-genderless:before { + content: "\f22d"; +} +.fa-facebook-official:before { + content: "\f230"; +} +.fa-pinterest-p:before { + content: "\f231"; +} +.fa-whatsapp:before { + content: "\f232"; +} +.fa-server:before { + content: "\f233"; +} +.fa-user-plus:before { + content: "\f234"; +} +.fa-user-times:before { + content: "\f235"; +} +.fa-hotel:before, +.fa-bed:before { + content: "\f236"; +} +.fa-viacoin:before { + content: "\f237"; +} +.fa-train:before { + content: "\f238"; +} +.fa-subway:before { + content: "\f239"; +} +.fa-medium:before { + content: "\f23a"; +} +.fa-yc:before, +.fa-y-combinator:before { + content: "\f23b"; +} +.fa-optin-monster:before { + content: "\f23c"; +} +.fa-opencart:before { + content: "\f23d"; +} +.fa-expeditedssl:before { + content: "\f23e"; +} +.fa-battery-4:before, +.fa-battery:before, +.fa-battery-full:before { + content: "\f240"; +} +.fa-battery-3:before, +.fa-battery-three-quarters:before { + content: "\f241"; +} +.fa-battery-2:before, +.fa-battery-half:before { + content: "\f242"; +} +.fa-battery-1:before, +.fa-battery-quarter:before { + content: "\f243"; +} +.fa-battery-0:before, +.fa-battery-empty:before { + content: "\f244"; +} +.fa-mouse-pointer:before { + content: "\f245"; +} +.fa-i-cursor:before { + content: "\f246"; +} +.fa-object-group:before { + content: "\f247"; +} +.fa-object-ungroup:before { + content: "\f248"; +} +.fa-sticky-note:before { + content: "\f249"; +} +.fa-sticky-note-o:before { + content: "\f24a"; +} +.fa-cc-jcb:before { + content: "\f24b"; +} +.fa-cc-diners-club:before { + content: "\f24c"; +} +.fa-clone:before { + content: "\f24d"; +} +.fa-balance-scale:before { + content: "\f24e"; +} +.fa-hourglass-o:before { + content: "\f250"; +} +.fa-hourglass-1:before, +.fa-hourglass-start:before { + content: "\f251"; +} +.fa-hourglass-2:before, +.fa-hourglass-half:before { + content: "\f252"; +} +.fa-hourglass-3:before, +.fa-hourglass-end:before { + content: "\f253"; +} +.fa-hourglass:before { + content: "\f254"; +} +.fa-hand-grab-o:before, +.fa-hand-rock-o:before { + content: "\f255"; +} +.fa-hand-stop-o:before, +.fa-hand-paper-o:before { + content: "\f256"; +} +.fa-hand-scissors-o:before { + content: "\f257"; +} +.fa-hand-lizard-o:before { + content: "\f258"; +} +.fa-hand-spock-o:before { + content: "\f259"; +} +.fa-hand-pointer-o:before { + content: "\f25a"; +} +.fa-hand-peace-o:before { + content: "\f25b"; +} +.fa-trademark:before { + content: "\f25c"; +} +.fa-registered:before { + content: "\f25d"; +} +.fa-creative-commons:before { + content: "\f25e"; +} +.fa-gg:before { + content: "\f260"; +} +.fa-gg-circle:before { + content: "\f261"; +} +.fa-tripadvisor:before { + content: "\f262"; +} +.fa-odnoklassniki:before { + content: "\f263"; +} +.fa-odnoklassniki-square:before { + content: "\f264"; +} +.fa-get-pocket:before { + content: "\f265"; +} +.fa-wikipedia-w:before { + content: "\f266"; +} +.fa-safari:before { + content: "\f267"; +} +.fa-chrome:before { + content: "\f268"; +} +.fa-firefox:before { + content: "\f269"; +} +.fa-opera:before { + content: "\f26a"; +} +.fa-internet-explorer:before { + content: "\f26b"; +} +.fa-tv:before, +.fa-television:before { + content: "\f26c"; +} +.fa-contao:before { + content: "\f26d"; +} +.fa-500px:before { + content: "\f26e"; +} +.fa-amazon:before { + content: "\f270"; +} +.fa-calendar-plus-o:before { + content: "\f271"; +} +.fa-calendar-minus-o:before { 
+ content: "\f272"; +} +.fa-calendar-times-o:before { + content: "\f273"; +} +.fa-calendar-check-o:before { + content: "\f274"; +} +.fa-industry:before { + content: "\f275"; +} +.fa-map-pin:before { + content: "\f276"; +} +.fa-map-signs:before { + content: "\f277"; +} +.fa-map-o:before { + content: "\f278"; +} +.fa-map:before { + content: "\f279"; +} +.fa-commenting:before { + content: "\f27a"; +} +.fa-commenting-o:before { + content: "\f27b"; +} +.fa-houzz:before { + content: "\f27c"; +} +.fa-vimeo:before { + content: "\f27d"; +} +.fa-black-tie:before { + content: "\f27e"; +} +.fa-fonticons:before { + content: "\f280"; +} +.fa-reddit-alien:before { + content: "\f281"; +} +.fa-edge:before { + content: "\f282"; +} +.fa-credit-card-alt:before { + content: "\f283"; +} +.fa-codiepie:before { + content: "\f284"; +} +.fa-modx:before { + content: "\f285"; +} +.fa-fort-awesome:before { + content: "\f286"; +} +.fa-usb:before { + content: "\f287"; +} +.fa-product-hunt:before { + content: "\f288"; +} +.fa-mixcloud:before { + content: "\f289"; +} +.fa-scribd:before { + content: "\f28a"; +} +.fa-pause-circle:before { + content: "\f28b"; +} +.fa-pause-circle-o:before { + content: "\f28c"; +} +.fa-stop-circle:before { + content: "\f28d"; +} +.fa-stop-circle-o:before { + content: "\f28e"; +} +.fa-shopping-bag:before { + content: "\f290"; +} +.fa-shopping-basket:before { + content: "\f291"; +} +.fa-hashtag:before { + content: "\f292"; +} +.fa-bluetooth:before { + content: "\f293"; +} +.fa-bluetooth-b:before { + content: "\f294"; +} +.fa-percent:before { + content: "\f295"; +} +.fa-gitlab:before { + content: "\f296"; +} +.fa-wpbeginner:before { + content: "\f297"; +} +.fa-wpforms:before { + content: "\f298"; +} +.fa-envira:before { + content: "\f299"; +} +.fa-universal-access:before { + content: "\f29a"; +} +.fa-wheelchair-alt:before { + content: "\f29b"; +} +.fa-question-circle-o:before { + content: "\f29c"; +} +.fa-blind:before { + content: "\f29d"; +} +.fa-audio-description:before { + content: "\f29e"; +} +.fa-volume-control-phone:before { + content: "\f2a0"; +} +.fa-braille:before { + content: "\f2a1"; +} +.fa-assistive-listening-systems:before { + content: "\f2a2"; +} +.fa-asl-interpreting:before, +.fa-american-sign-language-interpreting:before { + content: "\f2a3"; +} +.fa-deafness:before, +.fa-hard-of-hearing:before, +.fa-deaf:before { + content: "\f2a4"; +} +.fa-glide:before { + content: "\f2a5"; +} +.fa-glide-g:before { + content: "\f2a6"; +} +.fa-signing:before, +.fa-sign-language:before { + content: "\f2a7"; +} +.fa-low-vision:before { + content: "\f2a8"; +} +.fa-viadeo:before { + content: "\f2a9"; +} +.fa-viadeo-square:before { + content: "\f2aa"; +} +.fa-snapchat:before { + content: "\f2ab"; +} +.fa-snapchat-ghost:before { + content: "\f2ac"; +} +.fa-snapchat-square:before { + content: "\f2ad"; +} +.fa-pied-piper:before { + content: "\f2ae"; +} +.fa-first-order:before { + content: "\f2b0"; +} +.fa-yoast:before { + content: "\f2b1"; +} +.fa-themeisle:before { + content: "\f2b2"; +} +.fa-google-plus-circle:before, +.fa-google-plus-official:before { + content: "\f2b3"; +} +.fa-fa:before, +.fa-font-awesome:before { + content: "\f2b4"; +} +.fa-handshake-o:before { + content: "\f2b5"; +} +.fa-envelope-open:before { + content: "\f2b6"; +} +.fa-envelope-open-o:before { + content: "\f2b7"; +} +.fa-linode:before { + content: "\f2b8"; +} +.fa-address-book:before { + content: "\f2b9"; +} +.fa-address-book-o:before { + content: "\f2ba"; +} +.fa-vcard:before, +.fa-address-card:before { + content: "\f2bb"; +} 
+.fa-vcard-o:before, +.fa-address-card-o:before { + content: "\f2bc"; +} +.fa-user-circle:before { + content: "\f2bd"; +} +.fa-user-circle-o:before { + content: "\f2be"; +} +.fa-user-o:before { + content: "\f2c0"; +} +.fa-id-badge:before { + content: "\f2c1"; +} +.fa-drivers-license:before, +.fa-id-card:before { + content: "\f2c2"; +} +.fa-drivers-license-o:before, +.fa-id-card-o:before { + content: "\f2c3"; +} +.fa-quora:before { + content: "\f2c4"; +} +.fa-free-code-camp:before { + content: "\f2c5"; +} +.fa-telegram:before { + content: "\f2c6"; +} +.fa-thermometer-4:before, +.fa-thermometer:before, +.fa-thermometer-full:before { + content: "\f2c7"; +} +.fa-thermometer-3:before, +.fa-thermometer-three-quarters:before { + content: "\f2c8"; +} +.fa-thermometer-2:before, +.fa-thermometer-half:before { + content: "\f2c9"; +} +.fa-thermometer-1:before, +.fa-thermometer-quarter:before { + content: "\f2ca"; +} +.fa-thermometer-0:before, +.fa-thermometer-empty:before { + content: "\f2cb"; +} +.fa-shower:before { + content: "\f2cc"; +} +.fa-bathtub:before, +.fa-s15:before, +.fa-bath:before { + content: "\f2cd"; +} +.fa-podcast:before { + content: "\f2ce"; +} +.fa-window-maximize:before { + content: "\f2d0"; +} +.fa-window-minimize:before { + content: "\f2d1"; +} +.fa-window-restore:before { + content: "\f2d2"; +} +.fa-times-rectangle:before, +.fa-window-close:before { + content: "\f2d3"; +} +.fa-times-rectangle-o:before, +.fa-window-close-o:before { + content: "\f2d4"; +} +.fa-bandcamp:before { + content: "\f2d5"; +} +.fa-grav:before { + content: "\f2d6"; +} +.fa-etsy:before { + content: "\f2d7"; +} +.fa-imdb:before { + content: "\f2d8"; +} +.fa-ravelry:before { + content: "\f2d9"; +} +.fa-eercast:before { + content: "\f2da"; +} +.fa-microchip:before { + content: "\f2db"; +} +.fa-snowflake-o:before { + content: "\f2dc"; +} +.fa-superpowers:before { + content: "\f2dd"; +} +.fa-wpexplorer:before { + content: "\f2de"; +} +.fa-meetup:before { + content: "\f2e0"; +} +.sr-only { + position: absolute; + width: 1px; + height: 1px; + padding: 0; + margin: -1px; + overflow: hidden; + clip: rect(0, 0, 0, 0); + border: 0; +} +.sr-only-focusable:active, +.sr-only-focusable:focus { + position: static; + width: auto; + height: auto; + margin: 0; + overflow: visible; + clip: auto; +} diff --git a/_site/site/public/font-awesome-4.7.0/css/font-awesome.min.css b/_site/site/public/font-awesome-4.7.0/css/font-awesome.min.css new file mode 100755 index 00000000..540440ce --- /dev/null +++ b/_site/site/public/font-awesome-4.7.0/css/font-awesome.min.css @@ -0,0 +1,4 @@ +/*! 
+ * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome
+ * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
+ */
+[... single-line minified build of the same stylesheet: the @font-face declaration loading fontawesome-webfont in eot/woff2/woff/ttf/svg formats, the .fa base class, the sizing and layout helpers (.fa-lg through .fa-5x, .fa-fw, .fa-ul/.fa-li, .fa-border, .fa-pull-*), the animation and transform helpers (.fa-spin, .fa-pulse, .fa-rotate-*, .fa-flip-*), the stacking helpers (.fa-stack*), every .fa-*:before content mapping from \f000 through \f2e0, and the .sr-only helpers; identical in content to font-awesome.css above ...]
diff --git a/_site/site/public/font-awesome-4.7.0/fonts/FontAwesome.otf b/_site/site/public/font-awesome-4.7.0/fonts/FontAwesome.otf
new file mode 100755
index 0000000000000000000000000000000000000000..401ec0f36e4f73b8efa40bd6f604fe80d286db70
GIT binary patch
literal 134808
[... base85-encoded binary font data omitted ...]
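
The stylesheets above implement the classic icon-font pattern: the @font-face rule registers the FontAwesome family, the .fa base class applies it, and each .fa-* class injects a single glyph through a :before pseudo-element whose content is the private-use codepoint listed in the rules. A minimal usage sketch in plain HTML (not part of this patch; the stylesheet href is an assumption based on the _site/site/public/... layout the patch adds):

    <!-- Assumed serving path, inferred from the file layout in this patch. -->
    <link rel="stylesheet" href="/public/font-awesome-4.7.0/css/font-awesome.min.css">

    <!-- .fa applies the FontAwesome font; .fa-check renders glyph \f00c via :before. -->
    <i class="fa fa-check" aria-hidden="true"></i>

    <!-- Helpers compose: .fa-2x doubles the size, .fa-spin adds the rotation animation. -->
    <i class="fa fa-spinner fa-spin fa-2x" aria-hidden="true"></i>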
[binary patch data for _site/site/public/font-awesome-4.7.0/fonts/FontAwesome.otf (literal 134808) omitted]

zA)4ApBHI0o=#zcPGS z;Z&!ro%w+kGBS6KGCVvbHIxgznSHPNtSni2yrej@II|?(+Ig1ml-NnKwsp?RQ^}|F zO}gZTzErxxGax!XBe5dpTEex+YhsT70Ytaq)>Q!VItrMO57SX_GJ&RFEXQ;dM}pfG z%CwLi`bm)1A@Wn5V`+F!62yc`u*X{|xAnJ@ft#TAO8dxuN%m!a+1X@J=KkBMxAk|B z4J=Lf$f9FIV`YFDu2ddRJCS-E*~8M4S`u4+j2P+A0(Gu7q4udQ#fn z^u1|&(+vJuc&TN$IOfr2^-D&yG(}gH)xhW z1L^au(#*n~q+;2Gc9}9_;exFT(~!+7W-QG~8+dWkofw3VW)O=Xe8sm7IW}L0H4P~n zhbobRk`&9Pk?G3V@~Ena-FRLs@H!=()}Kx}4Jab)24o^C4V8IW1(^j=xuMx9kf2UU z!=~BkIq6v$I7M?iv$9Uv8}otWv+2}k8?{3C82S@sR zM>JQ-kfTR~8^ex8Wa;$!thDBWvn6LL$Vdmm&LlQdgI4yf z(Y|p3)=_SeTXfrGyp6wd)9iuE=jayd795MXCW9vxY;I+bPyKeT@W$=+QH0jvjq?*7N7BtP1uUhKU2ONN>MIOxt0$MRYHGsf88a>kP!SoAn0w;bdwSIKH&eZG5rSRI(%=iaN$FRYKKv!9f7%q7{0*GQM%&{vh!d@VV zfPI*uB6wDn;`W|UNT_mMf#qd-8TLXi>r&5rp$as=jAj*)>4}|Z^ry}IR|v<(n+<1OR4D61r~_$K1@K4claWM_vn`DTi;Z|G_zd%>R1miu|hQ@}*$BTX^tN3{Q*2+i8MoIJCn)-T9+yPTxUvsxvq{HDiA^NnC^nE~-7`%bt?wo1x zU9tnAP5RJ8DzA7 z&bYa>r;7G`JeTy(VILZ zF(rjSW!xvizH`Ir&!d8=|gyfYv4Y};Bl%7xBm^uJ|jQY@+M|JV$E zSU}!Ivmkmn5$P@@7QOW?CQuUMQAXp8Uy9$Ok+FlidCPV?2I&qRmL|J@W^61PVTkxB zS2Q4!d){-KC#WaPT|2{@6Qah*`6x-rnqynf1!Ls-r|=H`+y!!scE-yU6=pl+!aE!0 zBgwgvW5-I)$>_o`CHYalb>~hbU$%Bwh(cOka+0iJv3~&Q4m~7}a0Hn3!S+}n7NVj1 zP|kMmFGrT-dZlk{sGqmWyOSoEY?%&Tg;K#>1)I&A!<|`5w%li5$@?RXsLxiNgVvGl zh?Qs?bVrY=5Kn3|Lz^cd6cLAFV*edWLM6n03h)!fl&Y`;Y(xjTQRO;n&bGghtRv=b z@COc5wb{dyqwM$;bOUQ3f~XTMfbz(_ zHHg|su{o=_<1bbL#Yt(cC&NQp^RGHbcJBJ3KYBZGh+8aL>bGSRhqd!P+%jF^W$ZVE zD&n}5gao~o|44%r=!JV1pWGrI0l5SWCGGOm1eT`Pjj|DH>b1|19wd{O`U?nUwVHi@y z)32?C$v{5(skX1+JHB!ys{o1rKR-fd#h&l}P2?)mXkIQC21wdvP`b+7B!?FNAe{JF?#Q4#O=aIHBWfx#3o2xvRn$>*WhQ&2 zopiy;6;~rzc-TiW@eyIVF!j<6r!OC?I&!3#BNOg2{4N@=-0I`x6vD!LZObIYgn_nc z!RDrG_b*jmtmYs{V8vwS7p4`eJMR+>H^nP&N@&*sjF)$)vy+N$l+uWPj8H3?v+BZa z4yncBlV?KrRHy(3dSi)OQ?u&!R~K#-7U&Yd`t)Ns56FT{Ia&gQYd_{pMcvu+IE7QU z)?b>NgOuA-2dc{(kE@8YJ9U;W+hDhJ+4>WgS#nBRlee#;jD-?yZ-!iwkblX!_R-Q6 zPU~0U?0z24L~dBCU5Cd`#3Z4I@S^i^vpkD&2I7n8pGUy~+_75B*mRdJtXR|t8Vsu( z(scl_R-0x?wuw1h6SFn$B26TJR6-5|)lBDh&Y>IBAtx9Z_i-e>zW9R`Zko!OYxdI) zPga|Cq!}&2d%k?l(XXSq#FCWK5*6Int+nl~l5IP7IYx3WN0aNDQP#Fv(r_rq z9qG5X+RK@Xlj;Tz>;wsl0|gU$W%lCGi9w$dKu4rFBVif-@D0^zDPJ=t zk~fUvH8JxUcAs`tQ`yidl)=ETN92eB=t;n}pAn4B1Ro|NKp)_*+L^H<%Y}U-3}6&L z4BGwE+_!3z^%0Ho>WQ^WVnrVUM~4CpUL~SA0-4jf#}A%Wx13zNG$u)07UMvbLUo)9 zyeI(3hcZRw)y6&Qn_t<@bqH{D_2Hlv+JgxV@Q(FXw=a@x-M;T=G&hJJ5dKy6R}o)X zQyK5eBxNNVjjGFMPG3HI+<9Xz`&t-|y-_Rv7$d@=Ac*+-a?_cXGskys$Ysd@;Wa}P z62%Y5aQ&k5aL)W~x?o4`iRBbr(|4lrGS<3xS}$tXX~pbtou3sco_UxoVZvI!TsoT* zuGeDRE9;zL$JDm`W0JvocCDyZvP1J_gZ)|-L_>?>7KJTlM}d{&10JT`@h?-RxLX8k zruez&=J~I0H696c+s#72WedYwN_nGLw`jjetwuN|t#ICwyID*|l>k!RSF~7;lBeHX zd{oB$3~68-Sjk=E{d>qNED{-Udk%R=dk2Sz7W>OB3udS6=zWGBV_xqVcC8<* z9c&&Fu}ECIj1dM%<6%r-E9C$F4knU&M1E!pE@oZ1q9Sua1MC0CmIuR*vW0FtGIyvI z2#$JWDn&B|I~N~;#2osZxf-$J~mrP)e6d$QNriN=;t-RK>c|lZSSV9a( zZRtD4Da6TVYo~RDvCGUy;F=s|E>>4wx({fiAE8RIk!fyn+X!sKCZU3XoIM_5E5T;eMy=TI+iZUF7d+?3K36U!tN=n4u|ZS^*^ud;pg2Qx`7A!i8Tx{9)W zc{PZZOD>;Szig@9hGiUe#>GZV(OGi5vHUcRsGuYj#i1kh@@XT&03p70<3(Uzwvaze_H{=Wzhv$c~?fVDIX*X%;X0YF$Zf_<> zHDHe_%1_aln#mbyQ2_)`+mOo$LDh)7P&Mr*iHwem1_;SVD2fl$hQxx?l}L1tPrL%QHGrOTs8Svl9!W- z6hN|)pLRlc#Dt~fM;1b=Tw)Zt+YOm%cx5}Krx4?M3xxZAVBG!5b2OvqS2jaW0+iWZ z+p0}>m18!n8_U9rxu5iq+}sl%UCJE^D0N(^It$(_ok5qO%aFZly7UL>p&~YO0X$+F z*#hUy#!uDsxlxV+;Qp4om#D?aKd~oLBN6$pPFQKsFF-jotZ)#6zB)l&wvVJwC}QGdd|e zE=HD^`1v3@QEig<5!W4zb=PCvHRmT_-JB$&HbY$3@b|i72Z^Z|Kev7L9`U{pemb;h z?&#l|x4===)#PvTR}LFS8j*UvhOQC(p_Pr#o!Kv6feac{Xfm!AWEmXpNu6XkFh!g2tgVdrrJGvTcj2(+FaXXR4nBRz$VN#fg>o^*S z41V8E(sgAZDS7moEPwsz0txvH!Tl~TdS_rV=kX)piX@MKps>(me(|G65F=+Elf}eB zvHwA{iQ^9{&unX4zi!*M_3Ik9ojudocou09u_?;4+Zxub+vd1VEIlihcI-}uI{Y|j 
z_&k39=i?{u{}ff?kt~p+>^lyc@sBar(VVO#BY;Qh1v4=cAhcc>s*l86FESDzl#`Jk zYDbr{7o4>tv0T*e!`fJ@CrEG=UE!0$3|1b=DYVgM9qV;Ungxit6U_oUj#)Io?oRLx zWZ@%Dfjk1OFBWp>=G{`#%dtSO7-)-%+(JN`-b!I_lZnLPFxe*ZNzOnT+cM|bWD>{w z30OM|geBNk+<{mp2sCvw{;F8qLFYmgT9`qw=86*XC+lhHL;AHElt70jfh2xCCzwkv z&OJ6FXOV2)a7Q#7y;bO{WaG)ci8pTCL(=D6XQf9s+#ZGVBpXp^XEG{ z>K8UR0V>oRw$p&xjlC5oH=91-k$UH>FwK3S!i?pM_Idgr^n>A z^R|u%U8+61&I%cHtM+>7H+gwk$HsbjZPI(~wcgk?_txxIx|*)G`cM*UwDQ`kKe>1B zsis@E?%X+Z)@qqySkb&=lbd(e)V35KJX3RhtxW%XHaKerKEI=9uQ#9ZDBdaCNdBV) zjrah3L~ii`uqN~I`DZGYv-}D&v9D%5wOk?M3x1|Q+enT>iRULpnc}961Ux+$AxBBZ z&zUox6AGn*AFqJkn=kLpD}Y<|WBEeq<~*Q%XZ{Fb7r94x_y=&pV8MzB4DgKdRO5xWVQf#?pGMMI zH#3EU$o74&zfylnuV=|}emXf|>i>*5AAWl2+?%wNV^#`>EShfr-Enlq-oYvGT-$c`PZ?V>8S3s@SQX~#TVl&hhI~OhK_C+My3gU$y~t(Q%;uL zjC>asgcCs+=*A)D6hfNX7h8!^iZ4w;q`T?Upm#6L^)F4k@H^^d*S3Yw0X*PQ;qKz+ z;pST7S9hSIrj9LGsf-R577If*JHU_ija6@4YTU9iL#x%&I+^na$lsxA2ogRHfESw`@s>+sYLz zgpND{z7UO1%}V0JuhThBbX4B~bcl6sT(ftC3S#o{arSkF7QqK{ z6Bl-a$w*Gm&Qxa^l4HT0zJSbvm?SZKO@>-WWp1j>1Nj_|xY08qo4rB09>fLwMD?hT zu#C3RHes1KC2jmNei`{^DweY^Awwv(Cr9ONy+mA3Q8LY;a-?Fpk-frHtDERHY$9^9 zBgz!&Y&9M1R3E__j(JW$eMmKA2(-<(=_78_8v%k^HN7Ten(1;5S9R!n+NeB1(8( zmHaAxh89AhGr)ULMqj^yqiV=oni)j>x4)Tv;1_H2lB_wP9{VEv z-IotYFWE1#`RDX1MSae3*QRk9wi#O|)1HCUBAA-JIgZ>YZh=)eS&2bU#mTFB)xpzg zmqM~vq*IHOSrySgq0c+}LK7XTqsu3*q+LTR`U2OGL-t#Nhdh(^7VaPq9qq<_bVM(L zPNWaK9cVq^c>4~ZZMhCzqq{bY4IH~jiF1BTgAp4C7q(i6gMi8ad0GFI! z0MGzll^u_fNcK55_fy)#iGHF6kah*|#1O3IhLMjKkS`Jl457YJ&t{Od*U1+z$;UD@ zkyhv#fYwS4d7K_jbKh~~Z2M>>$pv>s1X3m@vW@emS4>uq8t1uoIv5yc0D_%Ozg8h> zc_@Btoyo4b|HSiW^@Drm4L3MYeoe$<8%gp-zO48wCR^fd>JjwpcQM1lMl$(W*DwwL zQb}xFh_!QG- zC0Ub6rXg~$0_1Gu3j`+CWOD65xphJyE#X#?i2@(^Z)pQ2t%gG6sL9*xFp4NBV!^UU zd^B)}h@sb=8k0YgrrwQ_n_7_!@D9Ex|10t`Cr$Y?8;R9#U6Cg|RK9rKy2XIt{vus` zc3lfgc1s|sHO7&6Z6qPf$$=&C^^YQP_2(N;pFApSOYGA+>(a0jR4%v-vReOo+7EPu z`-G6y_P*;p7l)&5eR+qzIJ*2CfUdWK9u+K4x9yAt<|DM)7MYfDcdo2WbknHu#qM8w%quG z)6XorI{(J{`)&{2AH-ZtER}Wg$g_zRfvFw|kx9yPg2wx1 zW6}~6Qxnv&F|qx$W}0;9P6_&H%YxK zD{6aUWcbF4n2aP@(bo{k?w#AX6lcHY%C=jcGLJjogg;O}_@v@P z^kINJoWx!aBALi}UJ72X@L5RCi-9^~c7 zYTv+;liti#w8F!o8$^c3&>r5Pf0NR6@j{TDFdXh)VG(~i1VjCUY-V&;RCbI^e|_#x z6Ik@2{K0^td_%gZ+HC`spikR!h^W&s=7+8febz*_!tZG-2jayNf41b^*?+QV;Hdjk z1Dx*_1ejk+d=STbDfK}FO6sWb*MuO%D}5lADM^)PfQHSJ=NE&93?b(KF`ocHv8X5o z@T0(XcO(Q~&=vA?&}0k&Ju|9%PvE4x`}z83yhMT_?-iUXo$T54j#_(pHEq z){0Jrx?JncC!#u)?5x2of)AD;Z)7EY;tz=&m|saSgG3Le!=2XtQ>6{_34im0PF?Qi z6ILH85mpE*tf)7n%27!JZODr%)#v3}11D?*eTHlMiqAAh#p_inCvkwmM~~9jNTNpr zG968d<$Mo(we<*=19t+JKsYyWzQ(TD*iO0CAtT$7YyT`=WBN=Q#*AQnyk%o?Ux~O%Kc+au zH``Y&7+WM`G-Qm1TP(C9+Qm`hC=KGAyLV?7BQAjz!7bUby<-^CtkRKOCI*Zid233&AOfa?zja72g$abf2%fH$yI-X2Bu zHj>xo`Zn<)BflwypWxU=Y?FT~6^sxG!kIN8ijDJb!hB~rZ)^jFiZ~-Y{qM?8EwIji zw-W{QW(1i(w2^GWyoO_@zxrec^fC4&ZL!gHgTLJMR?jYo`!)ejGD9vRCetll|k zJ~fk3vw7>+x~jK2|3D`1;G&xRNiPqw$&)Po0=X|yYZ4}J>NjHQys5LN%=u=B)tT1D z-MQ-X&9-!Q6S%U+b^f=N(b-qO8~Z{HU(ho2&yIkg1O4&6=r(v}lFwzLRC+g&i)Q&x za&kr^tn2t)NpH~$@V#6hKBkY5+IX5VAt%9yo@T_A{Y{pyhQbEq5`T=~8}RwpVbRu+ z2E|!a&@Q8`$`_L6mrSjsc^LCTlIu2OBBS`RhT^s8d!g?t-`zDtGUEpZo}xa=B}uN! 
zxhc}PsCWo=he@`JNe-)pPb5L{y5c0342fXI33g9G_}rSw6sKkwN>qGrX%@6&+3ARO z-;t0np5FqmLbrFj=m=;c1u`uuVFiwA{*QLJq~1N2+%jUbtaNN9k>(>&;Af`GHj>h=EHA+K!nD_wMvZZ`bEdsvYt zGnq-(7d-so`t=_kF1S8%<$70pKUQGA4@nP>N(@1WM<}M7;^~5AR6WA_@Q(GBtJJg$ z`Uzd8o|u2#jf?k8baz)Fo7Due*2Vl1V#0HJvo5hVu7P|CQe##{Rh@`h7#rQ;dF8Q8uc2wIP=ADF1$crQIMaXU!l*BkS)6i>Cc~`cdabD zbdmc|SP-rc2oIO($TsCf)PXwj*IDNzye+(z+=hL9(HmZuK$|vu(yDl*xOvkQ0=FY5 z&?<-*FVBgrmP|49F_8Yej?M~ z%J_dt6_3D`=+HhXEP;2HwVB8Y2^qVK44h8j{09ifrB}=ik{7Gf43v#KT*P(6mlc0wv_gU=$@bQU|oAHvEjuXaV8CLEFG- z#1Y?H(|*uX{`S^f{}u#~FY(5WCdo?pGW!9rGo03|g+-JQ0uRO_OfUuYNh-#}fn*Q| zn$}(n=|7N8d_-rf=^5x(YVmy3Iaqo`hJ&b0lo;zCgJuGeN*nqPB|ecH7vQR~eWNlT1*rDdJmYo5Noo`HEmC9y0tDk67f z1Y)ELF;GoA>c*I5p}ajFcE45n68s^prcOi>vZkIv?XMG!EPG?xrKD&vV-1lhFw ztu`h~1&rZqY3=FiuPe{Xh*{Gq()E`5y<|r9t+g01=4i$}?)L$R)K@}B%%fu{yOis@ z35n73)gVgi;x*_YV#9wU5XeWrW1O@X`p1$Rr)ZbHCppSqzKML`5o)C6A<$$eC#|cI z4mDUlY?yTJM%Y6$d(Q8?_t);HWv17F6h;|hvbC%(12k@G10?AYBEkVP*%=sxsB*M9 zF&W6>#7UOJvtSWvDp1~AesKoia0aBF8uZe87oj^t=Jx>?59Au@tPe}*f;LNjE5!*Xt{Cm+qo(^ZW15Mi)XCJGk=PTjOYWh8yTERBY^C?=t=YN2Ha57 zd^~4Uscs@iH+bP)nnt&&XaKwoi%B4hyj3&{BVj*4GnUqeNZd%5#lNzC2kf(5{9OEE zH&wdGPR^^GJW(~lZ_1{5te=a~{(!$MHV>k#@C5Fz%qcJ6T3*zN#D6N#!jrL^$%wI} z59@bulMyxe$JnEWTb~|+A07iS%k8x1+*eeX?J{~$0-yfkd`xuh7ui!kP5oEuTEDa@_1t-K;=$F5H z|9C@ny#+@!fYp=!`nnw~tszT`PM;x~BV-&I2VYW@FhQ7ri;@M-taQ?4AURH17GEHB zSOYb3Q2R(`(qXv!!}Ns@nBNQUTlalU&)C3*sHRf@ zBf>%0hYT-eyE`FcP~tEG%ZYnnNSfP_}v#m8>LmRL)-%27it2F}N z7ooL33@x%vJ6S74{EFlu5UVz(c@h^2bqYgBZiIDYZgE_(8sPZi;w&)pX&D+;KksH@u2-haq3f&MV1d{xfrXGd_AOk0y zI)c-<5aMsq_k;68XVr+~!{Oja#Z!hHWHfNiHjr7>$}gg_JU6=!J&-V5PWfC;<)NZ?~>U5ktZ>u{{U2`DK`aoKZcbZGB zU~84;;_cz0lkuZk$a*=@(YBb7cfus4n{JnnTj$0uY2Gzy2Wok&e4wTpyn z|4Fo)4>wT2Vk?+khG<;|{+WdHAeP&9KbHR{I37(Y{WvUqK&5~tmV>4pZphHwc z)KmQWP7)4LJ{`B3`s-rSVhnNC@djf8gj-rb%8jg3ERTwTS~ZrFJ(|CkOruvZlMTlV z36SLHW#^}J-;?jfef_-z75M+pCErO3uv!{-p7^I_>u@C2e;>(*qr~!Du^KE#uhNM8 za0wEr&EMNFL%W(D@<3mI2dptcI!+fLb14*7grPe&gF0cbQnc|KE9yjq3F=0_03OkUI8_fU_5g9>tB8ddl-Pwg;!D{f= zFj+YndHHZtpf|n^h+7-8C-O47)JEc~)BIt&jdRmW2hvNiyRtnhL#$1FyPTmvwCR=P zhYmf?04It$bT~lD9bL0kAMHUm3cQt`ca*lh?;|d6uj|m8c$2)cIJ+ixkM%%uNl7>I z{D+mT#kCpU5l<@r1*yS%`4S4hz!>AXwFRovG>JY^dd!;?0>XOdWIE+rYW_O;r4^Bl zA=9UjH7So%Zf8E;CmSUdz9o;ak;xJp@y1#uKNaJ)SAPv0k>*1c2kFOGK4n)gcAGj* z1tpG+^b3*%$9Dg3iS#~Ol3b!MDZ$^z{i*am=|7E3R%7u-P;_p8?Dk-F3wPz+L70Dq zN<`;tVLCp16nuY?=mB$Tl7USBUoo}p%IBIGC9J$9$&m003;a^xmnj+jQ~IkOyt?F9 zJ|#WnCtfnP-3?xT!`j5qj02TP)3Ar)z3@r^XcXv|@2K}d?ne+QWk-md9T z7c(;YS}cl<1~huGwEbn<3nhkNLm7Ukge1|SN^n$sn0XYWe7Nx1q|Q1gEnGOMbNxxz z7Cr%KxB+c}TxZ4;W&-K4 z6m7f(&Bxy=@Kp3B+M#6WM3AH`MASwP+Urk{54 zes}>UztKfxKRsmi2Qt{ncMMiupTw`QvG~)5PXd2k`>r7Rg0$1aptrO|=8&z)SPL5Y z7UBr+$daSJ$|HzJmjXM5oi|^&=XonK95R&nSR^a}u16lj`mmP?cxnjiEXBV-=%_V*I>?fabSQ41!Dx+`70EkGp;?DBc^ai;h zSVJ1+2JM^@OnGa-eo)R^BNUC626U>w(cgqA!W8CO$72sj8#C!Y?R0lVE?Y%(0 zp17LdAnQyk$XawtN=!SI0TrG(9!Y{U$O_1c@V)ypkHs9ej;{`{@+pu(vsDO#JJP9g zLxQUZjiats4$g@S4sSiY^?Ks5BXCuYvm!%mX%TIv<{?8id@&2Kb;>dqt~@;OTn%W= z81$Ccj&Yf|dMSqm8s_I$=W#>(s~!hEbh!iZh%6UjX5z}D>%LC3PEJE=r25MfjpsAC zV|-KEzUX~{<#?g_&C1u`J$U`wlWO>6m$L+8N| zML1^GNC!mX6e`*b9v2-shrmU*qpd%)oeQ_Gp6@?fExvL6(RR0h$NaCi4XoQD3Y+Z4 z%LefEPpdSDpi2kA=KT)4Xad>yEDU%0(220x=zT)BM+vWWL|SlO3^AKzl?cicLOU~|NTN_@VC!eYW z3%Kwg+_O#2{a3UHf<5#Q;T9zU9QYuvcG zbH|UnHTN;cH$fvB4R3-GNt?Q~#LPs4Hr-m7$``|?RtCEku2C=B8RI94Ye9sUibLxY z^emHd>@gC34$#{*9ota!t^SgXYTsO;M(wg2@PfY3qjt0lBi_* zd&KE6Nn?}AdkQvTCOR)OORv)B<`(*}d{y{fL=L7zCp+8iVeh^p8~F;nL!) 
zQ}mKT*RM9-X>4uW@Tb>ZnSLBuGYpU&(^cUorT$Ygn_lAeY+Q7#p4CUkYExNqMTi72 zce-9x=4x;$$<4_OsSKqiHX89dCs+80(fvv@0jv20=qfcmW8U9!a8O5@NNS(A=KH1cVlP zfcUahM8Fvh+?VKa99t?0E(kAXL2pr9P*B2|uJb*VNWif}fH9AyWs>0V@L;YTsX%pR zSh0i^IaewqP=B%m+h`$2Mkg!vi6jAR%hOoJ!Dt60Hd2=)x)B#o2a9e)$FpZ7P{=dM zk(M!0^LN1rv0$NCp#JX~5WS*C8_8R9laXwd^X+tm(sj%RuV_{q9-b7gc5^ctK@dOj zl=JV4NI%(JGAtBN`Xm*ZR7CpUBE#6Lq~GD+$;4AKV{M(WPF+xtq%Gj~MnBu&s`6V) zzle5XwZ2J?!6CA!$iSq~O`CEysUrfD!O9XA8Mg&I34RkJ$J?rG^Tt}ErfU>X<1a@3gQ}xvwsvF){?VH#b zjjwOAQEWFa^RYKZJ=9zZ&3JB$oGs&^ddk zfm+Ki#L`_XN6%mwv3w0=^?y8(bYpiAE(C(_R!8R{cF-+Ta`0g8sv56_ZD0`g7f_2XS>Rrv;n&UcNv`a1iqR6 z?SSL7o6N_!JAAhoC`ilX>hg-}BkN>j$M?#4@Y~7BXg~#}GKFd=woC~03fz_9v^S8b z2EL^>7wKr3Pj+Q^l{zakB`piv7S%};4S2@0scx2Z*#YXlYg>zdGXk=WH z-GahgWm^Ka?%JUC@X9F-;9{~Ezw#)M?O=>``q-{57v=NbPL1@Tc*q*4Capa`gD2hW&<%t_^Mt%M6Za z)yGro0d%E5kcxw8sTCvuKJp5U-cjHI1TSr60&*%ME6{wTW@K{;XMm+XW)yYgsCPkf zesVz)gp*RCD2?3zk3U7gow-B0HggqCffwv6WQM57v1cuZg;chdi>(u$Lyhk!s{d9;6?zd9y1Nd$Yx;Wao` zjnto%h*axjNs=goE$$Qe3}!a%x|Z{|FI&~*FVp7c>GIVPkveS@XYU`ls={7IyEYSM zHtAu=OfjgVJ>0Y|>P=g+%eHZwDpm&hZ}PJ*UDf0#bGvaj^uBt3U0P->w`td!pq24! zwL9!H*UA)j_J)R?O={$dAsbZT{5tp9!Ec-0H#s?M+3x77UB2H@=3i1BwMSi6o>_o6 z*mz?7Z?dw2IAT;*YNfCv+sQ|Ji*oA2YoKb@*6`At|Kt~w-RrJx4PwW?=fK}ZM8*n>^i^Sn&@V*ZFO+Z~q+-J?AWOQM-nSW)`xEy$ zhJr|R|ACwBiYDL zBf-(ck1r+Lde?)Ua|{gRy)v+ znUV3A0RtNL1D9V}ZLC(eWNco`nG)LjEBC-RxzHz@&4}6sW>7fmB`cRvGfwe9m&R0* z2^ZiagojZNGEjylu!^HQU36L(j()Y4E~EdZhgI}EnFGN1IYVuF92+a8-NRdG_ZpMwxMoLO!Xj1%zxX2dW$h}p3L#B9; zo}XsO&y<~qk5^hxdZ}+-42ikH8IqaoJcwd+@9Pd3LL25NS<}^Y$MlEN%PZ11gmc@P zv-E@qw8nZ_g;a+-dM1HHbx7m4}jfjo6`o>nq%9}vYmZy z@~)PzJbyG}e{EKy^&Ngp=Ar1rzI(0dK=Orq{f;`vYHR8X|3_{}kReb#mu^vdl?K&l z_iGPi9VpwImX?;9mIiV4K~^sHtFoOu9NglU*EoVAOP87izP19ZgWEHbh}RCrw35HC zJgeJwY@OOJ*XJ!{S><#G&$oLp7$a56c(nk5cT;I1D;hp_qZQ&-!_nLpFd*Bs_Ezve2TP@ z=|B@r10uLDT|QkVbTO?_R+X1m0jUR8JUZ1UAi&2bpuFnKfM(~z>|y7%<#uXup5wb* zRf6>+lK~w5Q_{c9$-;j>$~^>)0nNaVF=7Pdr-0Wc5K9;u_f3= zBVtzs6r_vvp*QJ6laAOGjbe$45@U+dSV_^um~Nsb0o1I4HR^rWz!=Z@<(~h2p8tKW z<7TbB_Ue6o>-*lXW5{{HaFAa2Ejk z-y}#pgn^%9GI%K>&Yn%&c8bqCS$3lOsI+F`+@iTE`aV3TL4Ql%CTjPnkA_;b5``xj zr~)a^{v0s}v)Gd+90&U#;#LSCWw?XRT8|v<*TvzH{>&FxR02$c!A#uovjt@?bUC@^*#`aq*U3=of zrb{ZTqf9RL8~y4ZGKzPf1scO$`E^uEk^)yJBj|X#j+g(6?ZXHxerxf=L`K%1IG!AP zOcNWF5Re`qE%o1&4?*UU;KOyIL$JdVgOoB#BfkzbCt!Dz;YU-BMjr;&!rqcy<}Gh-*8CG>gX*|zw> zU5^WNaNb}k`SFRuKXq|@06#b6owui{)_B+L-J+4Ve0YEidX)dQRQ~JwQT=BO4VT8$ zCGOs>{O!h(JGK0U9j8w0JSRQ8Y{%SrN^%#vL5irOY!QtsJbUeDK5#?-0u^0KmXH5u=wzx%GTA^XgZ{m`j?;lX>D zm5KP*d411lcKBy|`6|8By)(S|%v`83s;w-qQ|&w$6{K;ewz^fy#9SO=`FF=(pYuzE zv@E?aAyx^|k38IYIImal=p|lf(eV=)IH^|#9W-+cT_g=#o;GEP(miiZ?i@ZfL7So7 z;J?dX<-0OugJw8cRX$!BlM#aIg3mUd@q^bToX0* zgTp6woKn@)WTw?x@LRL$;P-wRdYCZiiPLBa=*(g*VZ&NtUjIx{e@chPVNxuncwz_wv=UzH6xS zA}sFF;3WmxNwhOf-{vRHitw8VY0g=|oGb<>9(bR%bcP|DR%&Rh2j$_EmXVPLrK*{k z$~yo1Lr8p%G#8Rv(LazQD(rpCV-nA3s?w@-x(duizdII|rB=iiO1Gz{XQ!z~mr&nY zIw6Sq`Ofg775$}Io*}(`dE!It?l*(&ZxQs41-?&$6VLwkF)=&7=foZ|?CSCFj^C>! 
zQ+J-MKd~S9$0rGp9`x6U#w_dOb1nK3qSlwTockE`y1`&(+LgI0t)8a|u_WwvT+_BQ z!6%%kUtg$T9^>EWb9nuJCmh^nwv$b3cCD!PEOmOFhL@29QAln`c5p~=MraS0QmUOo z!aU0Ys7q{tg$eM^1ah^^j+?6JliPA$dg0t|;4hiYe zk0g}QFxOJg>J{~?oyexgfKnU1f8F7YjR8&|#m#h~n@@ZJzQc*@*TRZsqA#siCs=E*ussXGaL6GKD@6H>LzgWxXGpdMD^*?b2#zPu-il% zE6T0kUcXDZ&jDa3JHSKn1)xvL0Cn;exlNe)CHVq?DCP7v-=dc*p7qnqpY=1yMb8Q( z9WXoaE`q}x#j|Dlk)n>vl8$Bi5gp46BSgCbw?XgbvtUuFUxAO0(kIzB&X4zY znLdwNL`vy95^}Z>9Q-*ylVm;MJFFZ@gyDjM^c@9Mg&8(CA_R?2y5K1K75_8Pwo0+N9&Fq=IMl9oi&Q}{(kG%2Q(bz0d*!% zcwc*T-=SkX3w3P2-v(fy0Ta(*Lx3*{l{$24M-GAs9i-vtBHBeliKt0Fcbb(o2dN9hj&RgZXDIy?Jvu_(t=&VY2l)P|(61$=>dKQ4lNzhs|6nwk_o(|rt2ucY~ z4(8X)n;PV%!h+fZoArf{_C0F;MiVtVZq`gC9dd018QpYNSJcGk>|m%4O|>DO8pFJf z0SfokZ_S*!`m@WQp8V|k^^vKsEhG!uR&_9m;FI$7V)GrKd;o2`g44 zdO`kt=~u+*$GS)L-)g?R`A73pmD~nZvl{9(-=+&RsGw$uj0PxvjUqj#UEy~I`P6Sz zg>H?HjM0RWzH^|H&HRxxzo4kFNLjhQDkhKD6&*fQs)TB|^c?=M&(fM@DvzaM>!3m? zV(a#;D$HNv28v%Q-(gakp_YY4tU4(`)N$z%Hc@WBdh9@Pi_ z((Em)uG`N5tsqfiKL(Vyaz=f_PiLgTfjox+rNC}Vp?8PyMl7S)8DHfm^M1Dq(*>JSz`0-nXF7O8 zY^5w+TjKolu&?^uad9GJ7AjKChn?|1w)|7CE1s7&o?Lgr`((|P@n=>p!(GW1#|3Zo z*}mwS&&jMyM^1ujlID2)@cZ>pBsE!l`O`qJ;~LD!vqka<{jUZcFrXb!8kDNVM@F%Q zbfgkj99N)Y?xY@^0dLQV@L8%kymU_W+c*k~>9onXhn7N@onhiQ*|V_{!~#ZxPBAnG zHxO$m-I_OvO#Id9r<9+LU%2sk`DbTNe0sn1&WDG8km_fOQR1=SshBS#>wAgTk@b)* z>J%$#Fp^hqu_JUgW!Rs3ESc<6Goyi}^7Nu7gm%V%5vAC={r%ZciArZKO7%7sj zxBX_{zT;RNn;sFHFnK;TbHxT*WV}UWT>{9~ z>;~~dhlN607LgOHowa0;8`Rc_q~4wbhtE*q_6*3KprOqe`0Kl#8XTg`hI~G&IkseL zx;AFxJC0i1AeCuzf}I6_O}2uy#zV?+JFp2h7t;)p z;jVsy;w@0jGU%E!^lMR_RZrnaED$GwSD^$vx z+g-D1lIU4uM~h-4SR@b7sn-nNqK<0AdIiMbrepxiC5lWCJu3lWcBbARSDoXlz?}jS z{tpzhPZtnwdrn4fdbSgFd64}Cw52{G^2RU)4z9{-TpG;+WI5epa8l%^Lse-GSxkmG zW^V@pLzz=|kc4LxWHNN`Y??t-j`AvO=(3=K6z4w2bZiOJmFd)c{0HgTsafe6PPFIL zRAMb+sX-yE-FHOxi3nmyxw*;+{d!SOIx@j9Z-$AmF$8CiVFp#DW~8TXPjPx^*q9Sf zq~puuo#ZvcR;8wAKs%??E!>kOd^5d7>m+ZUw=tc0O>@c%IZLzhQXxi?>IlH*tei|~ zcJ}t|*%~PPjuYi%Z%59P$++Jq6*O2y6S!gvl-+3_))$W zNDkzjV&L1;C-a6D@#ME}{y}D(09?aN&E^YVc-&Rp{o=v_==Yv^f_hSPh^hKt6wrui ziSgZ+nNY3V7lgPjvoB}}K+xkmYz#*hsc}>B5Lgl(i`7HKxQ4eUOEHB=Dr3tczg1V3 zLAb=q831uzO!AD+fvF&}=q&AoIu92XaaRH?LWsQ~Vk88UCCGcxAjO8aW_!7+TxXv- z`j#dYI_(2!EbTqMdE9;A$&2qde}9h*2p|!3v8Drv_)M`tMa+((?I(fo;E5EE=|LZNwH( zPq6f(wwlgShJ0|=8Cv$q7#p0sgp>*+qN5{t!xeEvba}Pr14(sxc{Q)UBCalvj?gTY zkUXJ$5(@#e*L&fnP&&e}`g(P^`GX(qp?E4&LiO+s6!?i`y^JxcVFAMx)(@y@R^v;7 z@d}Mk#?p`x-T>_#%?B=j%WIly+FNJ#EZ5M{-mC;;FV4NG0oMM_i9Dls%>AEm+P0mwR#{94FO*>n4HHDg4c zs~+-9_YlHFL+BI9PSy@+3^8jAG!Eu1IG73t=TE_FBm++mN}yw6wU3FX0(cG@8VNa@ z5*00h0FDBho-~?WWd4^}-KW$^hx|z7^N2Ikpeq05;g1?JCG1N&X&0R@rD+}W74b4X zq)EUg!Nf6)(zuCWpzaR_>SVo(etQ%ZoIwKNCx@F3Cg7Gk1R0kmU&=b<%4}+G_|Xf0j)13&!pSbR9Nkb!5MSjNAae zv{C%ZY-RXf&!1^>;qJgM%;4)LB z$oe(1Ki0fRHUv3;`0pK-<#i&v;?=QShA~?a>q}oj1I%WeBOUqm>peo}spfg?Jhom# z9XGSQO*^yTBaMEF_@gr)wHWic1<9`uUT87*XsBIwuhOAi-8JB)WB6AtUYf_7Z<2ckLy- z-;n^J{cx&UHGr3|0HJvBeY#jBccoTC*DqV3IXhS+uPCYCoeSL!eOhqKW_1Y+Ch_an zq~ZwF36oRrHqL<;D$Nw=iqj} zBKn=?5LHSV5U@jzEnlS!h}i1y760U53Li?Gx3p5tXVUUb>q>o8@mtcP5{i=x(=?UZ z-M+<<(klP_;Ee!ENdj~|M!hRmMkN`(7*&yxSC^Ql(&_Swixame=4gD&!Ya4!m-;m& zHGK>+zWYw%bZ+yGGNmpjOLy=+kDxMMw{3gM)-CA)Ta;_6Hl5ymwEO^HA5*tenUj^B zQ&zt@p@84Hv3U7v3b@XhTa<}A5({-jd3l9=^X{vk9y}{ObF&JFc^y7m6g8Q(nKgV2 z30VX+SV}TmdfIm=v3g4t5*!rb)3mBCRC9Cc>A9yyNL%QjY7nI-D5=*1pzqtzk^Gj8 z*iD%EDYw=K*Zcyp_hmPZ^S_WGr*Y1ku7va-E>B6MLc4rR{JJ^{g=_$o>??|oPe=$; zm6L5Ea$BY!qvtBi!*!w2PKF}Tg@Uhp?Z`a%QJquA6Y~AB9Sxyz^PKc6XhXM%!)$dY z#?f<4AK7em2W-!bHa%3-Yhj5jNGz43=}e!*U)L-&VTexRtAsH~SrqL>J+zcQ!QtEu@9w0{+~Tjum|ICc1# zx~Ry0$n-*655#}n)z>Zst$vT6N}WpRwB?6DI`r&Jv}@u?GqWyds-MU^*S7eI;SQpxR`O|6jnVA$%< 
zJ@ijv)p8qq!R5y?xfJvof0T_OwL5G=X#g6|-i1cPTq@{nG3XZIEauz=c*o0yW`aZe z+67o}yuXW5%Day*vCs)Z;$Nc=PqLlo##~oAh6S7iLpozy^ z5FYMvVybR#h|`%BZ|{3k1th~~3@cnH7&3}&hQ_O(+k>x&&Gu{^iY$w*WLs(8{qjpU zz;gnkTzg7AL^c$>K4!o{XSoK0o(yUgG5tDpFsxNOws3DHj}$;#F*}H3vV@v#qN=wF z-YR;V-_du6bA3PQw90EypQ%2(R?$+asc+ly*N(^1qALZTeWuhO)w?S6a|{ylmtj#L zZ+I<~UZFR(8D5K`zX8ANENPblG9VO)3o=%D=-vVwQ3u8kMmsJ?o*Yu+8#?JoNWZZ4zmrJ^ zdf?Pd_5s6;t^RD!%1#q^F|~l-OD6vd9i8b=kjOg?ED|&^4#yfCq2Txo1Q=b%6GZjg z12H`@Jdw!%T8tOA16q!azTUXIN228Wj!yDD69p?Fn-y_!5m|AikSB_D#L+0W>y_Q) z_m3;hsxB>cVyq|Zv*{IIN=q@&aQ@or-6D#N;FWC!&r%V*S{clY1SuFsnh08%;-)KWNT*e;ols z+-vV2yb?Yz*F20}Byqb&}{B9jteD6c~o(?x4hIgJ)d^~$}XwbpHgXcdv z;3G9S(@aHCQC3AlkyI`gXtl*rSqWNgLRM69LXoy2tGHN7CQbz-W7h8Ia_^&#QRP8d z(b2xXj?q!z0*ZoK;|{lXy(^-2XO&ktH8gv^w#aR_v#Fy&UoPhWc9pWp}7AI6> z6%|1r_V0?5_vV~k(>U|W%ssDa<+qgaYqp0Z3<#AT&8~^eQig6^wqjB6gbkrzooFg5DJm)|OesjyWul-` zb?9RZlzweTrCB)Zx!-Q!%gT0E=LxEM@pwzp*=q*G#(QeLnS#cSjS8d!*mHS8gBqI*|zDzUdc7g-Ns4 zEn4g^%_{YYU4_jRP|L!kS!)W`Zs8x*om+W!Y~`kJGZGg{ zsZfCPSbyWGElCd(r#6^+m>Mf^e_M87ym!1!EX^R;SY@H#(M$A}qCUHq`ws|wi_YO45sJh4b*p)LNpdPP`QTwCx&FPPI(K(ac^Mx=k3`*;T#TSvy7ApNhMsZGC_ay;q$ z#`LuTkW2ZVCK}$Z1{#3FCeng?U02Ylra+VDmhHQW?+wjGJT|95uY8Lyx>|O=rcsI! zq#q0)EhDA7CK#S-CYTJkoFN>!DL) z=8o$-m)ZnU^_ppGhbB@hX;!*Fxcq3}N;>J6Eai~}#P`ilFk}i0eISOW;#b~CDnU1; zP9&|4%m#;7W{!%IM@XeqZ>y@`xjlQQ=3>f)+;f$CbbBgxRYFC?802o+&!oEcO7We7 zYYbCoI{`n`Cl`Jyg|x;9vm?hIp6DeE23!GTUergQMSMD*Y@+6yr=(L!&~sHUAq6bi z;f^^{nxtQ%AcyHTkU0+Fw~a>8!vIu)368o$pxZ`42!$MjlxX@zFCtuf*-+9^->Wm% zkWGGh{yiPvd9Rn~9OUHn&(2Ec(g%ttdY{$;-fH(79e2wDdkJqoE8QhcTUU#-61hGW zTZZT;`U~jz_PE!9JkUS?wYzL2@!QMy9|5faf{sFHdvUIj$!nZ%%H%f8Hjvqb%qC+t zGiEcdflaUmHn$^ZqQ!{?$vWsL5qGv=(=$f)tmQJ>9k|LmTBfocbTUa%%e6Ka)ba&3 zJJsc9Bs;;0EzFY1otc~czq?79o9N%&%$b|nf`1Du$b*}}3 z2(g_IO+TIMNOyuN#hy>+ig23E%2jCJDH-?L96J{?`X{ zoX7@n0?^MSNN;36(j0V$TCLkN+35lhrsq8ksN9ec>F*R7P`rL$6q)DjNGER+#kdty z;g>4p2`s_n(@RjGJPPTJqMu%xP#!{Uzm0MtlQ+?M&H+){^_2lml>tY!`zp!2r;Z*_ z_6(Wkb-V9?OSl=O8)-}#IaoaB(Z4QSc0w=49l$1|NH6{(#~0imeYf~iC+M6^G?oYD zYNO4&T`}bbe(l5nmFD%{7kRX}a-UP>KJBr93OesEN5J@iEWNUqFqy2xn0R0R7`^T$ zz=4zKwJLhE3Reh~m87K-$gl^{%Gb7$8{2RdQW;5Gq~uoTI0gNFHT_{V{u+dyP}$NH zX0VK-A>UDdG6pPPf6_l4$@eF_{_8E805;Q9tCyCMka4(f83V4sHqvT@(DLYsn|9GTvEfuFu0$N@MRE~T8V7Pw zbj(B1k0z6(e(g}O(6~Y|3Bq`bCfy~AMCAR|3d3~z1bfiw%*57nI-9~wCUZysb|9at z$s0hQ1gfB}HHJ*kKPG{1>c~{$c$LWRkr80@9acheT!3)j=MP4dn?}X~H$+|?(+h%t z7Zhc~=&XkI)$Rv2w3Oc}eIKh^P~JglLvCb_Ru!{dn;a7!7lFIA^Kl{TTzi+6e4VrN zH?k@BP)>DPZA5WIQD}5>d_oj1lOM+hOG8$L#BRtKnL6vMeZQ6-|B+lj_4U5@ziqr2 zvM=uV){>Mxar+udiuUiWDm#%Z-J4bsQM{ zu+Wt_eo*|T^tn6rSEN-(lx$1emKGn8yDc}OD!vL>s5aW_+>$C_*y*q0kQ`IzpC1+- z9-ZR9Bdk1Ze@b0>ZF&Cw=sM}M3MfU`c{uTmZ@uqMuf$Lv;1Dct2yF;CquY5{YODv@ zvxy2s7ktFCXk)NXaN@H1jqF4H#-_w0^+$H;&V?M2LbDeU>RVaG5$PZ6$Rg@;vI+>o zDUf{8zD}2cqzFF7F;H_pH@H9b{ew<`jzJ-qH^+WYPm)OQ>_rue4tYL+K-@e(qJEH@ zo0o%oFk6h)m7g3Z6R&4nulnQ!3MFJaKjH;IQ|WVk$3R8o?v44ukwM#1HdY2z1|3P+ zRk^z=|41a%Bq1YXfM1YS7hV>g8lD;(o*SMQRvTNJSDRN>n_3GcgmuqnD^hm_R|Ka9 zr$hzk2jvCtirSUGE3aZ#%5Leip`Er0`Mee3M^=>hg!_cYd)02N@i`rTxb{eG@tLjA zB^w9c?zHM{sQ3t0@u>Q$xa!=hywa-FYAIbzQWO#U))j8q8n88aU3EZpKx6X0>b*4u zjS>5>l>L`q&~CsZ?S|?s5Og@U7WC+0{M!@iZh&$5P|+Yadt@#!6Z90Q1V;qTW=>{( z%?6kaF&kkv+RW9=&1{C*+h+64)|>g5Z8i%ui!zHhOEOC{%Qf3&_MzD&vm0ign>{f5 z!>rwWn)yugx6S97FEaNuUuEuZ9%-ItUTEH6e$4!&`8o3s%s)22W`4{3OY`r|e>MNz zyxm-H!C6>a*jqSRs4a$DOtfgW_|oD#i(f4Muy|_GVew2T6iS3v!v4bH!imDyg;Rwy zg>!`qh0BHOgd2qc!cbv^Fk09wyej-f_)ugaau6v+ylA3mn&@rOJkcVNr)ZTZT$Ccp z5`84PCi+5jPb?M>6Gw@Y#M$B^agBJFc)z$o+$g>+ejxrs{8-{DnJZZ$@sg~S_(%dJ zp_2C`7bG7`u1H!WMDjw~M><+MQR*h0A)O~(B@L2plg3F;OYd3QTPiJ`Etgs@w_I(R 
zZCPYlVR_B+Tgx`f=Q0bKrOZlZD|3{MkWG=zlm*JtW#zI%vPRi^vL@MYvUXVqXU0i5 zp6kyI<=i-LE|iPr;<*$qlgr@>xE)+Aw~sr_o#ejeTDeZ{c@Og*c0FF}q3Yq>V_1(# zJ=}XN>9M|tPY?ed;XPt{B=$(_vA4&^J?{2+-qWI|rss&B^LsAsxxD9^o|}3G_6+YC z-E&9J6Foog`K0GFE1A`6Rw}FhR@1H4S%q4~S>;;ktV*q_t?I4zTD@m=-s+mwEvwsB z_pE-ldT8~h)njXswcL7`^(gBJ)>Eu!Si4)#xAw3Ouuiouw%%=h$oiD^dFzj?FI!)? zZn3^&{j2pK)}1y|n;tf{HcA_3n?W|iZN}TU+Dx}uXya+K#U|7y!=~Eipv`+W=WQ<9 zT($Ya=AO+jHox1n+5BZgZEbA(*-o-`vt45AXB%ysZCho#)AoSvVcSOA)3)brKe7GV z_K|J7?O(WRd|@ZHSmU7TH>U8!A_-5$Gl?M~WV zu>08Viro#nAM7655jlpuTqAdp50np+kCso9&z3I$G_{X>vpifLEsvL{$TQ{n@?v?F ze7F3d{FwZ-{G9xv{IdLp{7d;a^6%xp$e-E^?R(hU+V`?|u^(zb+J3720{eIDm)ozl z-(VkNA7LMBpJrcVztjGJeWU$*_UG*{+F!B1VSn5HJNw`4+w40PW(u)_Q#dL#iXn;# ziW!ReiX{p!#X5zbVv8b75vhn%BrEb16^gxzgNmbyCdDPi=Zd?EpA`=kkFl7UIaoSa zJIEcJ95fCt4uc$qJB)Fd;P9ryJO@vQ)eajR0v)0pQXKLeN*yX4>Kyhs9CUd1hD;A_ zolH?DZ}q0ko$0D~->kkIBI6{l2YODMto%Qx^x~c!lwP-gqx1p{`@c|n-TphJm(h0r zru619N-uU?kZFcw^E7~$gbl)|Ss)`va4`g`9`2O}%O3hM-jJ(mu|W(5j~ZNrI`Ft2 zWwh!VgIGBP*H^KT8h27JyDS+lDV>i3UQ;Aer&z&At2L zO=6^bUKUrDp&Z0RI8V(1w3181{4GgSqt(>L{P3WaGbt_&u@469rG%S_WF%9OgqO^e z$r&=h2tI339Ev>{R>#waGKuxR3IGCwdP|X6F;|#gm7?6X-zE=E^wnFd4T3 zRU}E0ae3+zS+$yD$iJK@1&m2a%B0-H{1l!WgT)SAGiE%~gp>kJb8(hK+k=sO{KDZlhYmtwtU8QFFs&!_^!XDr1R3 zc<01#s<|K(wCh&TW1x(Kz*-8bXPEl3m|J>cO*8l7o43$*-S>vTr-;Sy8y z#eh;3N1sC92LKeANdQgs6bD2vHOC;T@axSn{ZbmPOC4jNdO0dzV8LBpjBYSW&E3aU z!VVcXQf7saV87r}@_Emuchm;d_AD8z^Cjx0rXm@)lF=-D)LewDmqdVDpxH7`u>>;& zdi9t$-yFj&lew>y4dKL7P~SEn&Js^pO4Q^Yn(8vL!w`Oa)m%-!IvqU}DNByZIL2?{ zfgQVth2EpHWtO`0yrD%w($vpZcdQbfTQ>OEbd_OjtIRM~GX2=#bDn(1>St?2VRhs+ zbse-_#p|`?9b^NLW4H#D0E^3xy}hDan0U*KY9efSj_B%sRu`!xh}tc65UZ5UWf$H3kd@)B1zOeOj}+vqk)aY!c4P z5}?&`Swu$VkEmO{loY6$j?~zkxV(7WJ8S^Q{6^}bG(>=H zCJg)@wtQ$ocu52hqBqJi1y1{8BFTJNn%$XriX#C2Hsh z{EoR@l5s41OV^xeZa$&6ldW0Gb5B#%=mMlS2dyHG09IK?Ej26Xl1fugpG`me3hF5oWJi0U@2NL;O=KMF zK5oPpvk~T9E-Ge61=`x46so!UkYic(^-i2(4@RCI%}?X#e*9n>#;#eNleb2*D1VLj z#5YGQ>c7@$*L(FBs&4Ln=s30s=tsW~z??fsN%rHs8K)o1ciJ0t3T_GJMEypL&7taW z8P|K6D%ZmNNX;D}u`;lcK=Qahwbnqs2~vD)3bEkG0QKGmj-RuUsx!Uk zNfRYe*^%3$_}13SRu!m-&f&SFkLJ*JQ8p$!ow6dmBBPvtyN}uh-?>gl1XZAKPFc$H8nFmRbvPPxK~0d6Gz0} zBvJ<9pPW2i9|pXkqPzmgI)c%Mq{uiQuyX-=lk5HcxJt}I`ukv1jlq528)Bd)SwZM` z#=Vx5^ctS7hg@!^XmI4J*&5JkBP9VeMnt^~_c^F|)j2G|RsdpxV=zJIB#+z-DJn|W~c$4yYy({+$-H>epg<|ZW zFacvWe;t)0d=t|>o!9}{d@&dU=H4B5>BG{}!lFEYot22Pqs0lCadAozYbH~%-cQ2a zm9gIPj+z^bySi-{By8Ho0(oQMhckF?m+aebzn$=(e>u_!od!Y~SC~fpFr_;J_$~pQ z5#k@!nBE=5Ef~yaiDeEjZ}PW0ksIQ?OkGM&+8Ju;s1Mt`NKG$^XOPJv<6NYnEw128 z!p>nFXrI8^=D>$$#XxpEIMQEc!HMgz1=*?Q&d7}S*W4I2mMIk09%}>}b~-X2f0+tx zR9C&OV&`tw1I-aij64IR2dNZiq6&uVT+fhwdy}?@zcD?gRS5TnS6(lFRUU~Zt zGr1{hC|3h`TLCB8hxv3jN`Nj2MR4}m5racd&4tPII_`2TR%=j9ImQ`vjzNH&Ll)WH z1-sOJ-hxYArrYwF?q~QWU^~}I*jAW0sIi;kx}m(gkhr;8ETps%TQQKcfeua&b8)4( zppD}ylFQ>uxSJO*-sB{DHR&lT%hQ#VL4UNQD77dlpHIryW+$dYafZ~9BVO36iev>k z4Yb^{Qt=PPtU$mR2R0eDb4;ThHYq5Hha{>jrc!T(T?UPvE{aV}jE@Ckr6eIQp)iF{ z%g+Z+5k$VBQX6S6n$F>DU^SH5`D^+Z#)|^Q)COv%Y%piKs2_4*!Ux;SVKwfrF`e3T zB}LmI|DK<_Jy(@3(I%#*CM6`rI~hcVU7}I?ZzLR5PM3WnI+yb|?%3$yB}Zp;JX1*%x5s>9go16*%wbicZy09WXv?wq&avK*{Qjt=w>Vlf#O4VlEB6Sz1D)u;%-Sgin zfpm!(^;yP{)rrqCuuYl~pL5VQi&c4J6i8<_bcG6{JucWTRN$WWHApM_lc|U|A}c=L zY30iJ_^gPMI46!WR?g35dWRkBiJBjMXR}4vL??ZY77FL zEW*?ZV?Wdp9Ep6@sIwL96F0Vwqt=I=~*i~WsL39t`4h`JK%HrzPH$Gg5=^T`Ru3S@_KL-#SE+k}qR!BXk94+Ip z$;)Dm=)ox#du(`n=*mxSeSY%djjykcoyZ&h;@0vZ5fNJ>L!OLqEG{i6D=n7R)N=!; zPwVH>GPRYz|LN83s)E9z+@egbpA0;)+)>)5f4=56U#$%Xj7%8l^I8qJ9)jxkA^z8J zl*xe^#r!x)aCz9y1U|h$mr? 
zudY3Zy}d81x>tT#aF+a!l^d8~SX(~75;$H%F3~FrZAM~}R>gT#dK_G>0c@*IH0R7$ z8@^U?CwvdBUF++&W^IG-@#75*$9Xo+**e6Hz$OyRZYU{Bj$`|NOyR7>?a7xiY%Cc# z75mGPN3y+~-WGot-Gxi2#4UuXx+=G*5=S)>##x-gWj{8ioCzL~+){I{lc@P}YNdjL zck{D%CKSJah1mbDoZQl zK1Cm3jQ(z17W7baObWydUGun__0LYQ3}Uz32<He($3v zuqxuBQljJIdE+6Q=f?2QTErZ6Auil>fbVj~t|Rf=9dw8%0`Z~UyANr&9Z(SzkJ*9C8)Y3j&GGH&Bs>flCYs!aj; zrNJ5wcs#W`R9}h<^OKS?LCiwm#ex5l%u0`q3x^e1%&C@zZ42dk4bWSYyVH{Qxw(&%*v3;EmJp|@{S?_V*Kjj!&D*JJ8Gxj72wQlWCta%X47wF!J{zWT09y_I4KB73FXiH*hq|3)A}L ztd~D-Jd(S2FN@lbS8=K=1}`o=bK+|acLWmw*i`w;824fmm8Y}X3`(=+;7+>`0~cCd zqG}U&?@@9fV+*7L0m}z!15*VXqZ`b zE(sg<6!^ua2gi}8+##S=abQ7cz{;AK%+dY<5H~TWBS3=cN87{bE@fOc2a(cYkRz=i zJvefcwGxy#^Bi4)?$`&wKpvd17adFsdkMb~bK-`**qd%C@I@7cp_aosTQFMb3n0}W zRdbNhVq+b3#E$Ts0f##d(olUl0sff@>;x9f^75ZlAYt|wF9foeHp`bb3$d?Ro$MVkC`!#y>{y&H`tn$#R3otWWp1 zUU-8qybH|4Mju^&SjfLazx?nIPA|XxzqH7DSc=3)CDLR6w-Xhbbt1}bs7sMxg1}j@ zPtYJ}6nrH3s&}70e4jO~R;_&Nl-7Bzt6Dd<`n7Ipjcd(mt!iy(J=%J;_1o4zTA#OB zwef8O+6J}_Z=2FKuWeP^mbSRIoVKdAhPHEUSKGdA`=jl7yHz{iKBawL`>OUW?Q!in z?N#j!?dRIBwtw6H$5Ylf1W0-Bf21sEwQ23$>ejlTbxo^J>!#MAR&8ruYfbBs*5=mh zt>3k_wh7v7+MJQ{ptg~1Zfy(N*0cq+Y1{JJYTAypHMd=F`>w6EUC?gR-n-qceL?%0 z_MmocdtQ4@`;qqM_UrB6v6NqYkG{F$#lja;UyS_r{Kj~{{ciop`l0m$>)&vJcHjCJ>z}QEvi{Nf z2kY;xzq7t)eb@RM>#uRScH8o2Xpu>KrZZMUp%a*f8Gw)MX><*NVk?f>5=v7iS= z04HD<#~5~Im%r>6^Vw=^*QWvt<3JT$p6@!6CDAg<_q`V{p1-g(6EmL{2+{QqZ(U=~ zlGPu+|L3?dZ?w<~g3OxXPb=6e(jpmwU^R>VpC0zT+kGV)kO*UXH`>`dCJ2E9=BwWj zCK6${FgN4F{NQ16usGqSG{(o=wSv(mKPId6qbu&7rf|&7RBmQBy_?cDg@L);_-MQGZTt>9>d%e&!BS@| zAB&g08y{_Vxw^kunBHMBe?pkdUw0n=&188pK7W57%KDbcFKZ7|U3I7DhQ9iu+ujwI zDeQlmT7iQ3GnM<_@(lOxwzlauH=5#vf1xq`?)bXht(j@c7wScYcjV>o`mpSdll1}i zm}>=Yc#Q3Da%1Mpc)IKZyW=;yTfo2Zd$(!w&+=%h3sZUE&&}k<^1#@d)7OmB(0afuINbCe(I) zV{T^McIFq~#xaw*v$T!r!+bTK|FoO@!5n6hh%l%amLHZ5%n2|3YXutQSp#?D19y$_ z(RP)k+n>rjrnO`s}--{Qf`0zdj-yKcw-Ql|Znfx0~w!zqd?@PM#J($IXcPY%i zEZ_h1z^@g1Ol|+4@tg8wGTC=#XOF2am>qfKn907Io>$+Q-Sqy_u7zJb-R}@W`8!UQ zcf@Io%VaV)??c4o52#O#V%#1nXgU+|F>@jCcpKZ_J&A z@3MF03-+%5t`!Vm@tMZ>tLZTRq8EaGtY0v9QyVgOxLGr^J1@q*V@d<={Y-i7cC%-3 zywbm3mfe^J;$ivj&b!(ametFDK5R`erNd12{AYbi%)83U;>Nr+5`MbsN-G#{3WIoD znEk*1TOcrh-{|8tGo`?++wTaNU3N3C@eIPM{E6?6zA8c)@KO^scH4!o_z?+Q%*wmn#jm(a1a)TTyWOP%NAtDac1wZ1xhWn_FxWi1+ucgwYJT#~ zK%Cb7e0;;4r?1`W?L2GkmJN~4qeqVV*Kp^l{{GI!Pod5s-l5(hTfH|7pBcC%Y-)se zXkdW%%=z;?=1iS7X}-tI8Os*TU*xgWJ0#REaEtTU;p2yoG{&*O-+OJSH$rdp4si|( zbPn_NcK$oTQ1A6&%>Twfe8iWHh}$_VWbFp;fVCl;o!5qih4`%tH+tC;80NR$I~2)> zggJMo|95_U!@`0ljTphgukFg)aKFHRbQ}R(I`1u^-XjEW3IYW|f=EG#z)#>K@D+p! 
zoCVVbYXw^c-muMrZHr(7zB>y>3q}e?3H~J*4*OJrKYq@ygbFpjc?&`jF2opm1ANXz z>{}4$R6zvXL-7^>a}gdNK{#Sq3%@f3^9Az+9)daWH4PnaKI}6EGX%>73t(S_x2487 zLyxYu^5reqXbk0y)C1uXhO)6Q|5RQUW<7kE;@^l6 zA+LmC@2nIomJp<|0saGwdEX4TwQyzbeu8x<)8DadK`8dN9==1n>mmd$toB~5jen|b s)(&B4mq{38BT$mA^w<7dxZ%e9{-66Cfg0+{%@$)VvB8fK@L&J^FN3;7EdT%j literal 0 HcmV?d00001 diff --git a/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.eot b/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.eot new file mode 100755 index 0000000000000000000000000000000000000000..e9f60ca953f93e35eab4108bd414bc02ddcf3928 GIT binary patch literal 165742 zcmd443w)Ht)jvM-T=tf|Uz5#kH`z;W1W0z103j^*Tev7F2#5hiQ9w~aka}5_DkxP1 zRJ3Y?7YePlysh?CD|XvjdsAv#YOS?>W2@EHO9NV8h3u2x_sp}KECIB>@9+Qn{FBV{ zJTr4<=FH5QnRCvZnOu5{#2&j@Vw_3r#2?PKa|-F4dtx{Ptp0P(#$Rn88poKQO<|X@ zOW8U$o^4<&*p=|D!J9EVI}`7V*m|~_En`<8B*M-{$Q6LOSfmND1Z!lia3ffVHQ_mu zwE*t)c_Na~v9UCh+1x2p=FeL7+|;L;bTeUAHg(eEDN-*};9m=WXwJOhO^lgVEPBX5Gh_bo8QSSFY{vM^4hsD-mzHX!X?>-tpg$&tfe27?V1mUAbb} z1dVewCjIN7C5$=lXROG% zX4%HIa)VTc_%^_YE?u@}#b58a4S8RL@|2s`UUucWZ{P9NJxp5Fi!#@Xx+(mZ+kdt3 zobw#*|6)Z(BxCGw^Gi+ncRvs|a|3xz=tRA9@HDV~1eqD)`^`KTPEg`UdXhq18})-@}JTHp30^)`L{?* z;c)alkYAc@67|W!7RDPu6Tsy@xJCK8{2T9-fJw6?@=A(w^}KCVjwlOd=JTO=3Zr+< zIdd?1zo-M^76}Jf!cpLfH`+2q=}d5id5XLcPw#xVocH5RVG7;@@%R>Sxpy8{(H9JH zY1V)?J1-AIeIxKhoG1%;AWq7C50ok3DSe?!Gatbry_zpS*VoS6`$~lK9E?(!mcrm1 z^cLZ1fmx5Ds`-ethCvMtDTz zMd=G1)gR$jic|1SaTLaL-{ePJOFkUs%j634IMp}dnR5yGMtsXmA$+JDyxRuSq*)bk zt3tSN2(J<@ooh3|!(R%VsE#5%U{m-mB7fcy&h(8kC(#>yA(JCmQ6|O1<=_U=0+$AY zC)@~M`UboR6Xm2?$e8Z$r#u8)TEP0~`viw@@+){#874R?kHRP|IU4&!?+9Cy52v^I zPV4Xd{9yc;)#l?0VS#6g@ z`#y))03Laq@^6Z#Z*uvzpl{$JzFJgn&xHlNBS|Eb!E@}~Z$^m!a9k34KX zT|VETZ;B_E$Ai8J#t5#kATCAUlqbr&P~-s)k^FfWyz}iK@`B$FI6L0u1uz5fgfqgU zRBmB>F8s_qp1HWm1!aXOEbpf`U?X|>{F`8Md500U3i;Mh9Kvbd(CeuC>077ww4g^h zKgM(A48W`XEDE~N*Th^NqP#S7&^w2Vpq+df2#@A*&4u~I+>t)9&GYcop9OtUo=;2d zGSq?IMBAYZffMC1v^|Z|AWdQ38UdJS4(H(nFI<|%=>0iAn3lvcSjIR(^7r7QuQI0a zm+@Z9QXmf!efG1**%Ryq_G-AQs-mi^*WO#v+tE9_cWLjXz1Q{L-uqzh z-Vb`UBlaT|M;ecG9GQJ&>5)s1TzBO5BM%;V{K#`h4juXPkq?e&N9{)|j&>ZKeRS#3 zOOIZ6^!B3<9)0}ib4L#y{qxZe{ss8}C5PC)Atkb2XK%PS)jPMht9Na0x_5hTckhAT zOz+FRJ-xk0*b(QE(2)^GQb*<<={mCZNczb3Bi%<19LXGc`AE-^-lOcO^Jw^J>ge2~ zT}Rg*O&{HUwEO6RqnV>GAMK$M`~TX%q<>-my#5LOBmex)pWgq|V@{jX>a;k`PLtE< zG&ohK;*_0|<6n-C93MK4I*vGc9shKE;CSEhp5tA|KOBE|yyJM=@i)g?jyD~Db^OKg zhNH*vXUCr$uRH$ec+K$#$E%LtJ6>`8&T-iBTicKH)SNMZS zB8UG!{1{Y=QL&oLMgLzR(}0Y>sN0TqgG|kLqv_VcVSLD)aJ?AC^D!bLa6K5Ut1)YA zghRXq;YBrYhrzOK23vXorq6v~v*CBb?*bYw$l-3J@cY5H}8Gr;t8{e8!J}L*5e>!hOQnM3g=8eoXDiYZBlmBW?=(Qvo;ib;hP4-|5>J zo6*MD%*UW90?aI=ncV;fJZB$fY|a73<^rd=!0(I%TsLE9TH#hRHV<&~b~82~@n<2= z1-*oTQL{zWh}4H zGjX>}SbW{R;(k^VBouiebp<&Q9S1P`GIlM(uLaz7TNt~37h`FJ-B1j-jj@}iF}B$Yhy1^cv|oM`3X|20-GXwq z0QapK#%@FUZ9ik|D}cWpad#li_7EK6?wrrq4l5kOc5H@2*p5ENc6Pxb%`OEl1=q{i zU1`Sdjxcu562^8fWbEEDi1(A=o?`5)DC_=i#vVX^45ZpSrpE35`g>WA+_QYDo!1%Byk?;4A*Y^%H_McC{^)mJp(mf6Mr$1rr8Klp< z@9$&m+0Bd{OfmMH!q^XxU*>tneq@E)#@LU6-}5Nz`DYpXi4*QA#$MRP*w045^)U8x zl=XAu_Y36n%QPIqUi^r$mjH7JWgdEmv0oiv>}BNj>jtO;GSSiGr=LO--M;f3$4%-kcdA5=kp1;?w1)iU%_3WyqWQmjf@AcVZ3xc<7I~# zFHgbYU4b-}3LN4>NEZft6=17@TlH$jBZ!NjjQC2%Yu;hJu9NWwZ@DynQp=tBj8Wjw$e9<5A{>pD{iW zZqogXPX_!HxT$LypN98z;4>ox_a@^r4>R7`&G@Wh#%HG(p9^;e{AczsK5r7^^FxfE z1>DZ=f&=UVl(8@Y2be_)+!n?cUjPUAC8+bcuQI+Aab3F@Uxu=lJpt$oQq38DE=X{7U3=m6P!eKVy6&>UK5q-?WYKFCon} zcwbuv_Xy+HBi;48;XYwJy_)eGknfFvzbOHS_{~WFRt)zJ zijpU?=0x zkwe%IkXL3J<39wBKYX6?A1iQgGX8uw<3E|t_zN{~?=k)}E8{7uHGX6%I@xLJ5o5hU3g}A@9GyXR4dV3$^??m7ZGyeD0jQ;~={sZ6d0>}3fa8JQ~ 
z#Q6Kj>z^jLM;Px_;9g|>2lp6?Oy32JW8UD|ZH#LugXW9=mzl&9Ov2uUBsVZgS;-{zFeKKwOfnbOFe$i&Nu~HMe}YLB^Wk1(Qs^2cg^_pF zV@!&4GARo9*fb`^0bBDClWMmysSaUvuQREB7n2(BZbV*M)y$0@8CXG!nX&m5FyO}f|^_bYrq)EtQ3jEW$ z;E;a$iwt`}|2xOlf`@fNIFLzjYz@1@vMcQB;TbKpR_b1>hK{W@uw#sVI6JqW86H;C ztQ;P%k-Nf8ey^cATop^SG>2V0mP~Z;=5SL5H#}UQ-NIABSS;9=rYBEjx70^!0%|%? z6H%vBBRb1si5UK{xwWyrI#6mdl~NhlB{DFSQ4f#HYnQ4Tr9_9++!S!BCwdbtt-PhV z2|9^MD=%7f(aK494ZCcz4t6dY`X;_62ywrIPovV+sT0pH?+{mwxjh%^> zh_?T`uiv2^KX}>z4HVY!Y%V1QDcBvi>!sD@MEbj99(bg@lcBxTD9~gYzfIm>7jFFl;^hEgOD8Clhu+6jw>0z&OhJ=2DoJ42R3QaA zWOOLCseE6;o!xG!?ra~f^>o~D+1yBE?qxT0^k{Eo?@YU;MW)Dk7u-Ja^-t=jry`Nm z^!iU;|I=I9eR|&CLf`eUDtM5Q2iZ}-MO8dOpsgMv)7Ge`r77T1(I!FduCuw%>+xyh zv~lQApLDjitE7#8{D!C9^9KL8O}^S6)E?BVMw_qP`rdoia-YG@KjOf%Qh4Bnt8Mcoi9h#JRYY3kEvn*UVbReO50BrmV+ z;MZw4c4)uX7XS38vL%mZ(`R5ww4GL|?R_+gqd5vmpyBRdmy(bdo1(0=sB8@yxdn)~lxbJjigu9=)pPhNBHJ@OCr@Hfy7 zMKpelG=3bck_~6$*c^5qw$ra?cd)OqZ$smlOvLJWm7$z_{bM*t_;dW+m52!n&yhSI z0)LYKbKpO(yrBb!r(;1ei=F17uvjq5XquDp?1L{4s1~Hu@I46id3j>UeJTcx0fQ!$ z&o9RBJJn}4D52n3P@|_Z2y%SzQ!WJ22E$LC;WNiX*{T?@;Pj!}DC|#~nZ>-HpIS<2 za>P22_kUiz%sLYqOLTT7B=H>lmeZ$;kr+*xoe54)>BRz1U!muO7@@$$G=552gn*!9 zJ(lYeq-%(OX#D?e|IqRz)>flsYTDXrc#58b-%`5Jmp#FEV%&+o&w?z>k%vUF^x&@! zd}aqf<-yN_(1OoX0~BNi5+XV}sW1Mo_rky5sw&#MPqeg*Iv+ow^-qi|g!>=1)d@|( zIJ=tJ4Yw%YfhiFbenxIIR1N1mmKeveFq!eFI?k+2%4<3`YlV3hM zS45R<;g^uVtW5iZbSGet@1^}8sBUEktA@_c>)?i}IE-EQTR@N-j%b9$Syc1{S3U?8e~d3B1?Lij0H27USiF&gR}A>wG-vBGIPuh*4ry;{Khxekv}wCTm%_>vhFZSJ)Pw2iv6Q4YVoQ`J2w?yCkiavVTWeVa)j|q=T9@J0pTtcQX!VHnIM6Al- z^*7Og!1y$xN4)5fYK&2X5x-Om4A;1k20|=O+$wl^1T}IRHkcq<^P$a{C0fAii(ypB z{ef1n(U1a&g|>5}zY?N{!tOqN_uYr3yPejjJ>KeR7IW!#ztw(g!*Hj~SpH|bkC%t5kd^Q2w*f{D8tJPwQ z++kT&2yEHVY_jXXBg!P7SUbSC;y1@rj$sqoMWF2=y$%ua1S%Nn_dvGwR*;O^!Fd?1 z8#WkKL1{>+GcdW?sX2^RC#k8D;~{~1M4#fpPxGDbOWPf?oRS^(Y!}arFj}-9Ta5B$ zZhP0#34P$Fx`;w}a*AU%t?#oPQ+U$umO}+(WIxS!wnBcQuM;%yiYhbKnNwXa7LiRjmf+(2(ZG}wiz%sgWJi>jgGIsPnZ=KfX?8mJ2^L!4-hBx#UR zZa((80+3k2t!n9h@La(dm&Qrs_teRTeB}Y= zShqm6zJdPGS+juA6^_Mu3_1sz1Hvx#*|M6pnqz`jk<&F@Wt;g%i&gunm7lM5)wE@q zvbn6Q=6IU;C_@UMWs|fmylAcBqr(MowarQT7@9BsXzyH534G z1e0`Rlnqb_RAIW{M7dQoxdg$ z;&VZRA?1jrgF9nN0lg?)7VU>c#YI}iVKVtMV&I^SUL2sA9Xn2<8mY@_)qZF;^OV!$ z;QVMjZTMUtC^eDXuo)DkX75sJ*#d6g{w?U1!Fbwid(nlSiF_z zStRqVrV`8MJBg{|ZM^Kzrps2`fI(Eq&qUZ%VCjWLQn)GthGkFz0LcT(tUy)_i~PWb ze1obC@Hu0-n}r4LO@8%lp3+uoAMDWnx#|WFhG&pQo@eXSCzjp(&Xl4$kfY60LiIx^ zs+SA=sm(K<-^V>WxOdf!NXC0qN&86q?xh#r;L)>)B|KXvOuO+4*98HO?4jfcxpk`^ zU^8+npM|PWn*7Nj9O_U%@pt)^gcu2m|17^}h}J6KWCJ>t zv@Qsc2z0711@V0%PDVqW?i)a)=GC>nC+Kx~*FeS}p5iNes=&dpY_lv9^<|K`GOJMG zE5^7&yqgjFK*qz6I-su3QFo4`PbRSbk|gNIa3+>jPUVH}5I6C)+!U&5lUe4HyYIe4 z>&a$lqL(n;XP)9F?USc6ZA6!;oE+i8ksYGTfe8;xbPFg9e&VVdrRpkO9Zch#cxJH7 z%@Bt~=_%2;shO9|R5K-|zrSznwM%ZBp3!<;&S0$4H~PJ&S3PrGtf}StbLZKDF_le= z9k)|^Do10}k~3$n&#EP*_H_-3h8^ZuQ2JXaU@zY|dW@$oQAY%Z@s0V8+F~YQ=#aqp z=je#~nV5}oI1J`wLIQ^&`Mj01oDZ;O`V>BvWCRJd%56g!((T@-{aY6fa;a0Vs+v@O z0IK2dXum&DKB?-ese^F~xB8#t6TFirdTy3(-MedKc;2cI&D}ztv4^I%ThCj* ziyQ90UpuyI`FYm%sUlWqP(!Qcg-7n%dk-&uY15{cw0HD+gbuz}CQP*u8*(+KCYFiz80m1pT=kmx0(q(xrCPMsUH1k{mefDSp) zD5G^q?m1N%Jbl&_iz65-uBs{~7YjNpQ%+H^=H7i%nHnwimHSGDPZ(Z;cWG1wcZw|v z%*juq&!(bo!`O7T>Wkon^QZ-rLvkd_^z#)5Hg zxufObryg!`lzZc#{xRRv6592P5fce0Hl-xEm^*nBcP$v z0`KR64y6=xK{a*oNxW9jv+9)$I9SxN-Oig_c%UK7hZDj_WEb$BDlO#*M?@b>eU7 zxN!%UE+w#Wg$bqFfc# zeDOpwnoY)%(93rx(=q9nQKg6?XKJZrRP#oo(u>h_l6NOMld)_IF( zs6M+iRmTC+ALc}C7V>JEuRjk9o)*YO8Y}oKQNl2t?D;qFLv4U`StSyoFzFYuq>i@C zEa1!N?B0BK0gjTwsL04McVmu=$6B!!-4bi1u_j7ZpCQm-l2u7AlYMmx zH!4a*@eEhENs{b-gUMy{c*AjMjcwAWGv@lW4YQtoQvvf*jQ2wL8+EGF4rQjAc;uiEzG%4uf z9wX{X3(U5*s$>6M 
z)n+q=_&#l6nEa|4ez8YOb9q{(?8h1|AYN<53x+g()8?U_N+)sEV;tdoV{pJ^DTD)ZvO|;^t&(V6L2z~TSiWu zI&#bLG#NGMHVY^mJXXH_jBGA?Np1q;)EYzS3U=1VKn3aXyU}xGihu`L8($R|e#HpJ zzo`QozgXO&25>bM*l>oHk|GV&2I+U-2>)u7C$^yP7gAuth~}8}eO^2>X_8+G@2GX0 zUG8;wZgm*=I4#ww{Ufg2!~-Uu*`{`!$+eE)in1}WPMJ%i|32CjmFLR8);bg^+jrF* zW0A!Zuas6whwVl!G+Vp(ysAHq9%glv8)6>Sr8w=pzPe1s`fRb9oO^yGOQW^-OZ=5? zNNaJk+iSAxa}{PtjC&tu_+{8J_cw=JiFhMqFC!}FHB@j}@Q$b&*h-^U)Y&U$fDWad zC!K&D&RZgww6M(~`@DA92;#vDM1_`->Ss*g8*57^PdIP-=;>u#;wD4g#4|T7ZytTY zx(Q8lO+5Ris0v-@GZXC@|&A*DPrZ51ZeSyziwc>%X>dNyCAL zOSDTJAwK7d2@UOGmtsjCPM9{#I9Gbb7#z25{*;Tyl-Zho(Oh~-u(5CLQl;2ot%#Nl z_cf{VEA=LuSylKv$-{%A=U+QBv0&8bP;vDOcU|zc3n!Nu{9=5j6^6DL&6tm-J4|~) z9#1w(@m3N|G3n9Xf)O<|NO+P)+F(TgqN3E#F8`eIrDZn0=@MQ%cDBb8e*D_eBUXH+ zOtn|s5j9y2W~uaQm*j{3fV=j|wxar?@^xjmPHKMYy0eTPkG*<=QA$Wf)g`tfRlZ0v ztEyRwH(8<%&+zbQ+pg>z^Ucf8Jj>x$N*h{buawh;61^S+&ZX>H^j?#nw!}!~35^Z# zqU|=INy-tBD+E^RCJdtvC_M2+Bx*2%C6nTfGS!1b*MJvhKZZPkBfkjIFf@kLBCdo) zszai4sxmBgklbZ>Iqddc=N%2_4$qxi==t>5E!Ll+-y(NJc+^l)uMgMZH+KM<|+cUS^t~AUy&z{UpW?AA~QO;;xntfuA^Rj7SU%j)& zVs~)K>u%=e(ooP|$In{9cdb}2l?KYZinZ8o+i;N-baM#CG$-JMDcX1$y9-L(TsuaT zfPY9MCb3xN8WGxNDB@4sjvZ10JTUS1Snvy5l9QPbZJ1#AG@_xCVXxndg&0Cz99x`Z zKvV%^1YbB2L)tU+ww(e6EZYzc6gI5g;!?*}TsL=hotb0Mow8kxW*HVdXfdVep4yL` zdfTcM*7nwv5)3M-)^@ASp~`(sR`IsMgXV>xPx0&5!lR8(L&vn@?_Oi2EXy)sj?Q8S$Mm zP{=PsbQ)rJtxy*+R9EqNek1fupF(7d1z|uHBZdEQMm`l!QnDTsJ_DX2E=_R?o*D5) z4}Rh2eEvVeTQ^UXfsDXgAf@6dtaXG>!t?(&-a~B^KF@z*dl$BLVOt|yVElz!`rm5n z&%<$O{7{?+>7|f%3ctTlD}Sc0Zs_hY;YO-&eOIT+Kh%FJdM|_@8b7qIL;aj#^MhF1 z(>x4_KPKYTl+AOj0Q$t3La4&;o`HP%m8bgb`*0vs83ZT@J#{j%7e8dKm;){k%rMw* zG9eKbw_mh1PHLUB$7VNcJ=oL;nV~#W;r|rv;ISD5+Q-FH5g~=&gD`RrnNm>lGJ1GE zw`K+PW!P*uxsEyAzhLvBOEUkj>)1sV6q-RhP*nGS(JD%Z$|wijTm)a5S+oj03MzBz zPjp$XjyM!3`cFtv`8wrA`EpL(8Soof9J(X7wr2l^Y-+>){TrmrhW&h}yVPonlai>; zrF!_zz4@5^8y@95z(7+GLY@+~o<>}!RDp|@N4vi4Y-r@AF@6Q7ET8d9j~&O$3l#Yuo`voKB12v8pK*p3sJO+k{- zak5sNppfOFju-S9tC#^&UI}&^S-3TB^fmi<0$e%==MK3AqBrn!K@ZCzuah-}pRZc{ z?&7p`mEU5_{>6x=RAFr4-F+FYOMN%GSL@mvX-UT3jRI;_TJH7}l*La_ztFn+GQ3;r zNk;eb?nh&>e?Z$I<$LDON!e1tJ26yLILq`~hFYrCA|rj2uGJHxzz@8b<} z&bETBnbLPG9E*iz!<03Ld4q;C140%fzRO5j*Ql#XY*C-ELCtp24zs*#$X0ZhlF~Qj zq$4Nq9U@=qSTzHghxD(IcI0@hO0e}l7_PKLX|J5jQe+67(8W~90a!?QdAYyLs6f^$ zgAUsZ6%aIOhqZ;;;WG@EpL1!Mxhc_XD!cTY%MEAnbR^8{!>s|QGte5Y=ivx6=T9Ei zP_M&x-e`XKwm+O(fpg~P{^7QV&DZPW)$j@GX#kClVjXN6u+n=I$K0{Y-O4?f;0vgV zY+%5cgK;dNK1}{#_x-Zyaw9sN`r9jST(^5&m&8IY?IBml#h0G3e?uSWfByzKHLe8) z9oCU{cfd~u97`w2ATe{wQPagk*)FX|S+YdySpplm-DSKB*|c>@nSp$=zj{v3WyAgw zqtk_K3c5J|0pC zSpww86>3JZSitYm_b*{%7cv?=elhCFy1v6m)^n?211803vG_;TRU3WPV`g7=>ywvsW6B76c-kXXYuS7~J+@Lc zSf%7^`HIJ4D|VX9{BlBG~IV;M->JId%#U?}jR@kQ&o5A3HyYDx}6Nc^pMjj0Jeun)M=&7-NLZ9@2 z)j60}@#z8oft^qhO`qgPG;Gf4Q@Zbq!Fx_DP1GkX<}_%EF`!5fg*xCsir}$yMH#85 zT3Y4bdV)bucC=X;w24>D>XjaA@K`En^++$6E!jmvauA$rc9F%b=P&f^I7M+{{--HM z0JXFl21+}*Oz8zr@T8JQp9Td0TZ7rr0+&rWePPKdaG}l-^)$@O*ON;2pkAjf4ZSg# zy{PLo>hhTUUK_q5L{o!vKb^7AIkbXB zm3BG{rbFE>fKfZsL4iKVYubQMO_AvYWH<3F_@;7*b}ss*4!r5a-5Mr{qoVbpXW1cja+YCd!nQ3xt*CEBq_FNhDc93rhj=>>F59=AN5 zoRmKmL))oDox0VF;gltwNSdcF9cb*OX3{Gx?X{Q-krC~b9}_3yG8Bn{`W6m}6YD#q zAkEzk)zB|ZA2Ao`dW^gC77j#kXk7>zOYg~2Y0NyG9@9L)X=yRL!=`tj7; z^S=K3l)dWTz%eniebMP!Z)q@7d(l_cR;2OvPv7I~Va{X>R@4XXh- zOMOMef=}m)U?`>^E`qUO(+Ng$xKwZ1|FQ|>X41&zvAf`(9 zj3GGCzGHqa8_lMGV+Q3A(d5seacFHJ92meB0vj+?SfQ~dL#3UE!1{}wjz|HPWCEHI zW{zYTeA(UwAEq6F%|@%!oD5ebM$D`kG45gkQ6COfjjk-==^@y6=Tp0-#~0px=I@H# z7Z|LQii;EBSfjse{lo}m?iuTG`$i6*F?L9m*kGMV_JUqsuT##HNJkrNL~cklwZK&3 zgesq4oycISoHuCg>Jo;0K(3&I(n-j7+uaf)NPK7+@p8+z!=r!xa45cmV`Mna1hT=i zAkgv-=xDHofR+dHn7FZvghtoxVqmi^U=Tk5i*(?UbiEGt9|mBN4tXfwT0b 
zIQSzTbod84Y<){2C!IJja=k65vqPM|!xFS?-HOK!3%&6=!T(Z$<>g6+rTpioPBf57 z$!8fVo=}&Z?KB-UB4$>vfxffiJ*^StPHhnl@7Fw@3-N|6BAyp|HhmV#(r=Ll2Y3af zNJ44J*!nZfs0Z5o%Qy|_7UzOtMt~9CA*sTy5=4c0Q9mP-JJ+p-7G&*PyD$6sj+4b>6a~%2eXf~A?KRzL4v_GQ!SRxsdZi`B(7Jx*fGf@DK z&P<|o9z*F!kX>I*;y78= z>JB#p1zld#NFeK3{?&UgU*1uzsxF7qYP34!>yr;jKktE5CNZ3N_W+965o=}3S?jx3 zv`#Wqn;l-4If#|AeD6_oY2Y||U?Fss}Sa>HvkP$9_KPcb_jB*Jc;M0XIE+qhbP$U2d z&;h?{>;H=Sp?W2>Uc{rF29ML>EiCy?fyim_mQtrgMA~^uv?&@WN@gUOPn(379I}U4Vg~Qo)jwJb7e_Pg^`Gmp+s5vF{tNzJVhBQ z$VB8M@`XJsXC!-){6wetDsTY94 G*yFsbY~cLNXLP73aA74Mq6M9f^&YV`isWW zU@CY~qxP|&bnWBDi{LM9r0!uDR`&3$@xh)p^>voF;SAaZi_ozepkmLV+&hGKrp0jy9{6cAs)nGCitl6Cw2c%Z0GVz1C zH-$3>en`tRh)Z(8))4y=esC5oyjkopd;K_uLM(K16Uoowyo4@9gTv5u=A_uBd0McB zG~8g=+O1_GWtp;w*7oD;g7xT0>D9KH`rx%cs^JH~P_@+@N5^&vZtAIXZ@TH+Rb$iX zv8(8dKV^46(Z&yFGFn4hNolFPVozn;+&27G?m@2LsJe7YgGEHj?!M`nn`S-w=q$Y4 zB>(63Fnnw_J_&IJT0ztZtSecc!QccI&<3XK0KsV4VV(j@25^A-xlh_$hgq6}Ke~GZ zhiQV3X|Mlv6UKb8uXL$*D>r^GD8;;u+Pi;zrDxZzjvWE#@cNGO`q~o7B+DH$I?5#T zf_t7@)B41BzjIgI68Bcci{s-$P8pU>=kLG8SB$x;c&X=_mE3UN@*eF+YgP|eXQVn) z)pd&9U^7r1QaaX{+Wb-9S8_jQZC19~W) z*_+RuH*MPD=B_m7we#2A@YwQv$kH2gA%qk7H)?k!jWbzcHWK497Ke<$ggzW+IYI2A zFQ_A$Ae4bxFvl4XPu2-7cn1vW-EWQ6?|>Qm*6uI!JNaRLXZFc5@3r48t0~)bwpU*5 z-KNE}N45AiuXh{&18l_quuV$6w|?c-PtzqcPhY)q{d+Hc_@OkartG`dddteZXK&Je zGpYJ-+PmEUR`sOnx42*X$6KT~@9ze#J>YvvaN24jI}4QG3M;w<>~!2i@r)9lI!6N1 z0GN((xJjHUB^|#9vJgy=07qv}Kw>zE+6qQns-L}JIqLFtY3pDu_$~YrZOO$WEpF>3 zXTu#w7J9w+@)x-6oW(5`w;GI8gk@*+!5ew8iD$g=DR*n@|2*R`zxe7azdr7~Z;$%< zSH@*lQ9U(Hx^%Fb|1?Smv({(NaZW+DGsnNWwX(DFUG8)(b6Rn>MzUxlZhNbVe>`mS zl&aJjk3F~9{lT-}y>e~pI}kOf@0^%Vdj&m(iK4LTf6kmF!_0HQ$`f-eBnmdTsf$_3 zR`hz2EjKIKWL6z@jj1}us>ZmY)iQInPifzSiOFN92j9$pX*CuV8SPrD#b%Qa97~TI zS6)?BPUgFnkqG8{{HUwd)%ZsvurI~=Jr8YSkhUA!RANJ;o|D->9S9QB5DxTybH&PGFtc0Z>dLwr|Ah}aX`XwTtE&UssYSEILtNijh)8)WWjMm$uT;+p1|=L z><4lEg%APBLn+FRr&2tGd)7icqrVXFE;+3j`3p~mvsiDMU>yK$19$B@8$Dy4GClfzo4)s_o2NuM3t-WhCrXE>LQ z_CQtR*!a0mhnw#I2S=WxT_H@^Saif`)uhLNJC zq4{bSCwYBd!4>6KGH5y~WZc@7_X~RqtaSN(`jfT!KhgGR)3iN50ecR$!|?Vq8|xa+ zY#*+B=>j4;wypclu7?wd+y06`GlVf2vBXzuPA;JgpfkIa1gXG88sZ*aS`(w z_9`LL4@aT0p!4H7sWP`mwUZRKCu@UWdNi-yebkfmNN+*QU+N*lf6BAJ$FNs^SLmDz z^algGcLq`f>-uKOd_Ws4y^1_2ucQaL>xyaQjy!eVD6OQi>km;_zvHS=ZpZZrw4)}Z zPz(rC?a`hZiQV9o^s>b?f-~ljm1*4IE<3plqCV}_shIiuQl=uKB4vUx2T$RCFr0{u z1v660Y3?>kX@{19i6;*CA}pJsFpo{nculW61+66XAOBZD< z{H|h`mJS5C2;ymL##}U*MC%fL0R97OSQ@lUXQ-j?i{z{=l-!$64H{LlTLo{Ln<|OV zBWq*5LP`KJl74fC{GzzP_Z;;;6i--QpZUrtHC@+RBlt+=_3TyV4gk=4b{TBJAx!GehYbTby(&-R337 zQ%g2)Uc&K|x|eL0yR*VCXDBqZ89C(obOFYYht(k`^q0OaQ*Y{)@7xE~KQ7XN)hGlZ zl5$1<#s!tyf%>mbIG(9WR`R*{Qc_h(ZGT^8>7lXOw^g1iIE2EdRaR^3nx_UUDy#W6 zy!q(v^QLL*42nxBK!$WVOv)I9Z4InlKtv#qJOzoZTxx86<5tQ*v528nxJ^sm+_tRp zT7oVNE7-NgcoqA#NPr*AT|8xEa)x&K#QaWEb{M34!cH-0Ro63!ec@APIJoOuP&|13 z9CFAVMAe@*(L6g{3h&p2m!K zEG?(A$c(3trJ5LHQ@(h3@`CB*ep}GDYSOwpgT=cZU;F&F6(b=V*TLLD z*fq(p>yRHTG1ttB*(Q8xLAl4cZdp^?6=QjcG;_V(q>MY0FOru|-SE}@^WElQTpCQZ zAMJy_$l;GISf1ZmbTzkD(^S!#q?(lDIA?SIrj2H$hs*|^{b|Kp!zXPTcjcCcfA+KN zdlV!rFo2RY@10$^a_d*-?j7HJC;KhfoB%@;*{;(hx_iP`#qI(?qa{b zH|YEvx~cE^RQ4J}dS>z%gK-XYm&uvZcgoyLClEhS(`FJ^zV!Vl&2c{U4N9z_|1($J znob`V2~>KDKA&dTi9YwyS#e-5dYkH?3rN(#;$}@K&5Yu}2s&MGF*w{xhbAzS@z(qi z&k99O!34}xTQ`?X!RRgjc)80Qud0{3UN4(nS5uZ1#K=^l&$CdhVr%4<67S=#uNP z$hnqV471K$Gy&){4ElZt?A?0NLoW2o_3R)!o~sw#>7&;Vq954STsM(+32Z#w^MksO zsrqpE@Js9$)|uQzKbXiMwttapenf8iB|j(wIa2-@GqE@(2P#M09Rvvhdu!sE0Mx&cK&$EtK}}WywYEC~MF5r3cUj%d$|lLwY4>`) z_D++uNojUl@4Cz8YF3nvwp>JWtwGtSG`nnfeNp(_RYv`S2?qhgb_(1$KD6ymTRgnD zx^~3GBD2+4vB9{=V_iMG*kQTX;ycG^`f{n+VxR4Ah!t~JQ6Z?Q;ws}Jw|#YE0jR0S 
z+36oq6_8xno^4J?Y02d!iad3xPm+8~r^*Vvr4A<|$^#UEbKvJ9YHF=Ch2jF`4!QS# zl8We8%)x>ejzT^IH%ymE#EBe2~-$}ZXtz&vZ_NgVk4kc zOv-dk(6ie2e{lAqYwn9Q$weL#^Nh?MpPUK z#Cb)4d96*6`>t7Zwsz#_qbv6CnswLS9Jt|b`8Mqz?`?H1tT99K#4#d+VwAy}#eC74 z;%UFxaNB!Zw`R9){Pncrny4>k;D}TV2BU0ua-+Fsp>wmcX#SGkn`h0O`pN*`jUj8q zIlnc7x6NRbR)=wP1g`-}2unC>O6ow=s{=NV6pfEo3=tY8 z=*$TKFk8Wv0K8B_**m*Q>+VW*1&gD#{#GSc(h#YQL?*<(ZUx~>L^RyAG3}j0&Q|mJtT7ec|Y7cr~ z+A`Wz!Sqz9bk0u-kftk^q{FPl4N+T(>4(fl@jEEVfNE$b*XSE)(t-A>4>`O^cXfrj zd_nrA-@@u?czM(o3OVDok%p3(((12`76;LwysK$;diTl$BdV)!p5Gj=swpb=j2N>b zqJ1D5E#zO9e(vJ6+rGuy<(PS-B6=gHvFat&)qr%j7T`vT1ju zIvHwGCk5)id{uDi@-e?0J*(-W-RGZs)uhSeqv7TA&h|CUx(R0ysoiQC8XnxL&RXI3 zO`H`8Pe&^ePw*`{rIJhzUg@MuhUL`IONG^*V?R0h5@BRDFgEF45b0jSrg0r{<4X)nw^c)uQ_Ai_p>ic!=K$pmnyqYb=`6fUo40ru#Gh= zMRJxOD(1n?Mjz_|IWyJK5^fh3*n>eI0MmEKq%=-oIdGd4F-LT>RL)Bp5FWxb4aNLNXB^o?YBSXQ`SwN zI*N~(CQW~P$HpzwrMG4IZKI>TVI4nQ$a-#)zV}LE(xgQ5MG@L#e!e@ ziNtg{Ph&qpX9FLaMlqMh>3)Nu%sAO#1NEsbe=#4Vqx0Y;<~+mV!xwj%}Z=xZn= zSqjxSH4T~v>Xd*=2wmHPN?@+9!}aQz-9(UIITZ==EB9}pgY1H4xu^-WdOFSK!ocZc zd-qhN$eZcN#Q^0>8J%)XI$4W(IW6R810*ucIM7Q#`twI|?$LYR1kr>3#{B{Z4X(xm&Cb21d^F9MKiD=wk_r+a=nyK!s^$zdXglCdshbfKBqa5aMwN#LmSNj6+DPhH4K-GxRl;#@=IJc zm{h}JsmQFrHCioWCBGzjr5p9L4$t4`c5#Cz(NJ#+R7q-)Tx2)6>#WZDhLGJD964iJ zJXu`snOYJYy=`<+b*HDiI9XPo8XK$TF86)Ub5=NC@VN#f$~GDsjk01g$;wDY!KqOh zC$x={(PT7CH7c?ZPH{RNz}Tel$>M0p;je4|O2|%Yq8@sCb7gRhgR4a*qf+WGD>E8~ z`wb<@^QX)i-7&*Z>U6qXMt_B2M#tzmqZTA1PNgzcvs|(|-E z4t*ZT-`kgepLl0g1>H!{(h8b`Ko=fR+|!L_Iji>5-Qf34-}z%X8+*Qwe^XrIS4Re$ zWUblH=yEfj!IgeIQ>m}+`V(4u?6c;s&Ym_6+pt|V`IQ1!oAC@R1XC3tL4BQ7`!TnU zWaoqG=nhI@e7dV7)8VzO8ivuC!q{hcxO7fo#2I=<`rktP0OfAO-CQE!ZT@}e7lw;{c) z@2l7RV$@&S5H@{=Bj~^Kp5At=Jq=Y92rXP@{-D4j>U=-a^gM2s-nIZA;u=fbm2BP=Zca5W81_cA>Tr z)x+r@{pu_la2Q(wm`Zqyd@GhNDNT&4oNHb_>w4{jIU}m&iXykMxvi;WL8;y7t}cp& z9CEpR)WlI1qmOq!zg4QTmzv#eP3>NLd7V-+YKmuyLFP533rd>WnvL$F3b}g39PYk; z)^hXQ%5jO(B}-TMio7@t<(V?7M5!ycd)u4Z+~!hym9+KwPVO^Wkhi^Dc7$R@)o$oh z^mRbgQ@5EvalJa}V4Bi3cs^w5pYtbXXz5W|e%+z-K;8M%Lf~BlZRvNI7=)cG6lbjg z?)l8iOw!mU`uaKN@UL4>d#edM9^-ePb(VICy6Cg-H^Ew$n_s801w`A83W!_Z{D+1G z(<9A>WB@>)D%cxw7c?Xv7N}6gg?&TkLX|0@k&VL)YMI~SsE^dzj2^3BKL7SM$!0Lt zj;ytKWw|(58n6_NNH$JVRh!W*wewMr7)H2jOCruuJAIIfPMFpf6j=hL!D3nVT9Dpo zut}|VoG<%v&w;HrQtz<%%T&X##*z5{D!!egoRN}R_Xxuy+E3dhx6!7mlNyuqsKR-P zlP#8EKGt{Ij~8kXY?&*%q)PkPG;rziWPd>HefyPwV49!>f&Q_@Fn{8Cyz{HCXuo+( zJMu<#{Tl}^-dh%nM0IrDa@V zMHgAog4`tk;DNK-c{HwRhx%Fn%ir3mex!XeZQ4QY)vQ_iZ(j4-GcO?@6Z-Y*f?u7_ zmf!}WRoGkI#BO9;5CFvMobtV@Qm?#eNKbbX!O@xEVhnm z6LFnWu=E}6kB82ZEf!g}n5&IuivccTHk-_5cazDAe+O!_j+dQ~aUBy~PM34Eq0X-LOl zjunFnO<4Nq|BL`!xwvyj&g9Q0(A_*xLT~l{^nM&kGzB7+^hP^L&bD7iVdXe3wobJXVX~o*tX$ zI5xthE?gAl!4+v~+ASbN2nYIqNn_#3>!fi2k=g*Hg_%caA#plNQR+RtHTiW>(*OFG*-nzu~6DMCrX>xzP`3sj}D!||8 zf3dk-w(NCUMu^C%k|t?sa>9gU_Ms-R2Hhm~4jNfPPyH!3Zy zV0QFf=MWK%>|(eV$pB5qOkC)uou{oIJwb_i4epV{W95%N)`+uOrLx7fNtD^czsq4B znAWb+Zsk|YX}a?b+sS-!*t2w1JUqU6Ol`&Jrqa5=4eeLWzr1DX1fWW`6MYf+8SOW< z+EMJ|fp${RJ7q9G7J+`pLof$#kBJP^i@%wNnG3fnK?&k>3IUVo3dbs9Nt)x_q|wIB zlBAi#1Xv-<+nr<13SBfkdzI?dJ|3~?-e>MzG(yRsA}I_oEd{HEGZ&7H|Km9mEbL6r z{Ubhh;h6_QXN_?>r(eWJ@CM1-yn6Y#am!aXXW!EfCpu}=btdYT?EJ>j+jeuc%;P2g z5*J%*$9La$^cy>u0DqjO#J%*IdaaPnAX#A6rRQ+sAHhY@o32==Ct3IF&sM14!2`FD zA))>ZKsccTyp$U0)vjABEY_N5lh(@e+Gj>sYOTgf?=82K)zw-?JX2d$x}n2Y0v%SjDtBXDxV2TyyxQmN?2%8zkKkKF*!AA$P$1#qrF%fUu~URt`tp3C_(>^tkcbHhO0Hh0A zpTVQR{DjsD=y-Bsl#nuTVKRxYbjpSJg|K+SEP+^Y*z3S9p(_-s9^YP5Zc?Vz*o(Qx z?f03co`dGfW}0T>UdEZaW>s0XVEzlw@s&bc+B-9;^^AGsx$AE~!1-7?tn9z|p4}_? 
zRsM&sjg1>#Rb#6jFBRKMeZ>I_4<%=&rF3yqUD&Lik@7<@2*(0rC)UqPj`Gfe8L&{S zhGtB67KhF{GnLZCF}gN0IrIPU_9lQ)mFNEOyl0tx-!qeCCX<;7*??>lNC*Q7`xe43 z2$7wD3MhiII4W*v6;Y775v{FSYqhp+|6)6BZR@Rdz4}#KZR4%=+E%T%_gX8-9KPT4 zo|$Aa1ohtUet#uro3p&@^FHhEX`OcGjq==$UeAQ~<6AZzZ|l75nn<#}+mo0rqWv5$ z1N<|1yMgX+Qmz?53v|%P=^&74bwqfH?xIC`L()W{|G`j^>kbs7q<$hb6fL@S za#nHyi$$TJ7*i!6estChR}QriMs#yy!@Po#AYdeWL~* zUR%)FT#4Q~O-N!O&it}b8zFOmbe=egH*Ka<9jT?dFCMAcagAo<>tKrW%w?P_A_gd& zXwHTn>a>WEWRzimu7EJ*$3~Jfv|@bLg}6iH4mgJB!o60eP#_N!xYrQoMf4&rGLau~D9ila zYGD*3*MNN?v*n6op+dQM!Kkr@qH1|^ zh7skG&aC;+$C$OSR2!ke>7|B6JDpjV%$Jo5hI14PGyx1I=Diw7>h@vzL?PLTzC;`; z?}nkmP%J6$BG!9mxz?+Np zIHbVy&<#H&Ekz1(ksSJ_NDQ+XHyg-!YcW8YvE5v*jFQ->F;|Q-IB@Mw6YP~v=jY$~9n@~8MVO{1g z@g=-I$aXs1BH&>hK(~|d>Y9n*;xRm&07=pLuqVYV-bwyCUIKgMdLSrovEs2f3{b z<++d|UX&}*7)y8){Ntc{RL*udOS8r%JV4EZ64fUF85n7%NAWejYbLV}NB|lS>SnYN z?PFpysSR*OodDcNK;OVKsSbKS^g;|bSdogA=};1?3rYq|Nc_tR!b2ln>=bNTL59uS zZjF^Y1RoS7qF^>LEqt<#Mu0ZjpiUNLtsc5%t*8}5lW4OWwFXfqGn-q~H)5}2mSRZ^ zKpfQxOe+KC(M5V`tz1zQ)@pTTQ2?NgStmwpvPCi&U9wd)m<^I-w&{(`Vb?Q*4ApV5 z(G}DMfgox!S_C+OTa5UkEbB#G$SC<8vLrDPPT_Uq5N~7`%Js5Ut3!o!f@HJm?b;(N zbbv90V6J7=E&)E`b|}N4n`VOOuvo$IEMx`%EkX8mpug0yY80enF3?M57gI zQ((b(;dv_v7PDKFgL|6)q^sb%Gp_aU)wp^uX96>jGEsOmBhyuDZ8}+y{bG?UqGqyDfYMtJ{6@xXI>fVC9g+uG zbQzl4fY>P6VAkv8GEpapl2>quqSIoui)Mr95Nuw@voGBux%Mq zYqG!&A9RXvoI%gZRwI->g2SYPB1tbg0U9UkC70cRFPTKU0L{E!2e?|as;p-wNwA;> zm}yKfYURNzE545Jz^T+srPZUGX{3qx0H&3ol`)Eow3xXj!2lx+DkB=}EoF`(n^)2W z_26hljpwvSdw}akJQN9;WAQnnHTN=3Ko19hR`Qqt#60*^1acxN84Oi8W-4nXd^@w0 zVpMzKqWw_(cHwQ`*uQ>F4F;Ncc?}XU{q867ZF>zihsu1j_i%f38%41S53RkO-5Bq< z<^ffy6fQNDn;z=lDz2OXjU+MMr0ziZ)HseHI3+}-N8v$8UWEK_n5pL6VPUS@YH^ z-F?^bJ%5Vt}@l0B2B$XfpF!7J0KUW$rc!~hPD3+Ms%)ia=pl{0nuS0_) zMk9rt16uqE&;%{gtVGqhUs{u$%()O~zzC_11`vYVVXfdfEU}YwTDn~JYTSiTDRNih z4#ap?$m%48h4*c`rhEH7?VLTW9aCi~b>z~)W0xM$c|y(8H%u~4?Yic=Yr3WyCvBMC z9P;P}Ra`!CY1TVd3~%qgX48EO<*6O5d**2Osm_lAM&ZKw?7XUKU$o?gjCIcqH|%NJ zuxtIAj>_t$YW%D0ShIfD2DzU5%qnHsRN0vm^B3-wcim7D^;K7~Uj8EuKZ;X3tlbVD z(=eh%wxAVAWPvDL3Mmg=TPKpMGzTdG=aT&qTw(TFBIg<;`kFOrB)&>#;&>KE1kb>+ z2B2dhdAN+pj}^ZH_t#P}WOC_RDs4ppbD0<}eknMnviR2G%#`AniYwzKw-y(_5*$-_ zmw5S-TNmxQbkR$TmM>p=*`CF(EG{@lszbazB$k;2MYhTooy&w{`02hJ3>+yIKEOe7 z@JMkSHwDW^-jsRwlSM}sEqQs-p1n(#FUOllp3=O)Tup&?1<^)a@`nk7JGz35N>n$} zBOy~(>fI9qX^_jCE*5|=cn@Q((|dZ4jk)4MmOAk+0xA#wuDRF-%lTtBwIA!9Gr9Ct z$c`7mj%LBTedqC%Rm_T=dk5?Lu6Ta&XaF9q!a$AUtk$ z*e$72Su7q{Rad`o)%w|Sbyv5rzAip{{VH|GtUY1tf`Dk1!6*HuN9YH|>@$Gpvq}N6 zCzbi<_XLxmE|LLdr@JCzPlDyUYO2J>kDK?krp5CY@11*7)8aCVVb&~zrEGE2O>>tojkD`+_dDb1*Ao``HQpP(giSRL)4OKuTMcNVOb@(m7M?noGc?geUJ;8t6u0>WYa5RLDJ>(^Zu~>-DTzEbb z=Pw6=C#Q(ao#It|Sa^jEBWtV8YNL5Ce+KO1 zHqBg6?QNQUAP0QbaOG=Lqb?5ZLlZP3JdqXFBbSG?_!QPegco`UzEDBCfy7n?l|5O(2uWh*{9fh*}OFkZGv)4J9g^Su_Z-y zktO~$6KAdO?4HIhm;a)+gVRbF%BNDw_qH-YUp3>pUiriPU-DaPao4J;%WF%Dllm58 z#~3FQnvO5O$UIv}o~Up(EN-l>@f8Ipwl+*yG^2h|U81N>`H9+~R;Nq6WZk+k_l_|; zqH`}-wki9Eekf?yVOxp~wx$i7mS&wyRfA;|YZ$pD0iFQM7=^Of;Mb5{*g%Q+MV}ZZ z4uCY|_@8q>JQ{}h=B5NG!svf6mRKr5#bVli@?ZR%doi+~75m0rb2XFdcTK&}XtK)Y z#n$?!<(KX3?3gc;rSMQ3)+>e{<=;f)h)dXgJA+DdJ5q_(=fbyjlD zyxOq~%LPEFsh*KmXEIW|_M9hDm%Gdrv97&s&LCvUqb)02CoZ4W(b4X%EB2q(#G5YM z&@wJkH_qwtRocyZt7Y4`(pa=cD4!kEPl#4{yum=*q|U{&O2DV&=)yXRws%3})r>`7 zty6tM=kuW2FpR*(!{^GYty*Jp1woSmG%(Qs4H^#!;!Q>OdkH@{*K(vzM1v#qO$_R{ z7+Jto9d&*4xTs#V1lt-9mM`tTxU{8|32n(X!6M-UNsS#R?m__F|Gn3X9 z&{djT%C$c`e{S8Bi4#KMy0LTS?(Vvq%{y6Caq7xk-@t{Re0DV4heM^6gkrEpL-{{% z)|>$4EU3Gq;JmPH{E@zsRX+#@>gc;qk2i2FwVHuCI??#%xdiMweM zWaT78*EG!|+OV634wd0UaR@TenRhksaP%AUUdHC0VcZ2nT> z|Lq#TX5O&2h!GYviFiX{IRHYEViDCLf^Wf)se&K4oOU>MQK$_!7!L(|E5Bx`dn|^Z 
z8D!P9pUu^~tYLFpB<~24WRqgt9Jadj5ce6JRV}}8O%6hRA!!0JH5LHs91WhgWWLJ- z!KL(|#^$p^amdJ5g8rZ$Ggy6?%`B;J_Kppf<0XMKcmmW9@>-TJn~gIShXI5aI(xEx zlSd-_6cOeEGR2J$MBqWpK*2%7D7_wEFG0(EP;?Sr1EpZsk|pld3%9nq47KjwNtga; z^X`AUY0HzBudMExSE>hYgVxdT>O;3bbp6&zv#t6lVjtU=7OitgFDbdK>r_jozEYb*t7qdj?MRk%pu)4==CR^bNgHOU-j*emraW7T2WR%b?1^<K?p<`lIUQwM$W=cui|bx}?bTOb6E1v3`QcM^BdcQe z=PpkFc*njs2H)6MH*NX+$l&D3bkD1=@_CF6^b#6m7%YZwDoKJobt%*>6l7EZ=V>@G zzzY{zEr!q?#B%Vk9VD%4E~MxbJ)hcn+q^0Z=@qNy9XNJiUX{8Ns(OzNq-fqrsbhbE ziWT!T7SLhKQavnveOJ`2^uK@O;eGSx?>nsSlq%#_#sdo9iphZ#Jwo|{FhMbfSrS>R zQiwFss8KQy?9j`|&<*8j64q^OVgV#e63^ksE_l^9($wb9f`EyHv4&?kqn<@TAOMm< ze1YGL4dcENbcWZd&n7h~Atmwe(#RoslRpeyDguGF}j}$MRo9?SM8!=4Q2wU($EzceOopeaHDv$UhoQfY3;W=e^g5xM87H z;I{8*GeL)G;HH8ITBt8$#)NOPnG>ql&Qh*h zWt>ty34rm;*F33uigBg#?eg{u7R{5>Q`U$R2j3@_Lkx_M{bOC#*zx1XR_*c*B-IGq(GV|B@o{8hJ3p1*lD@AJn%&$i*n1|9(=hKoMs|KsjeFu0HwhG-gj z6NR02xQ2KllvU2l&Q+ddYuKj6LihSj-&!x-tUR@F>EtCIlkybUel`o1t{IyqKm3Y# z^I%x~1FN64cI~X$=bbnBPUd;Rxn=jXhSG-2Z`jT3lX2q?hsL#({W072*)OlJJQjT){R0dcw$MIV@Im_3E)riYBiU=q`Y_6ca&e9uVeb_jW)Y(*6X`BKYM85 z!b8t)Ui*XT*XL>UuiVO9x8B8yUlNM}WBcAqm)&yESfoE>5R7X!w(jnYSbl8TpaivJ~v3;LD^f$vOykiS%0kDp1GRq zVCg_iC;5ATIf&(~gt_DK_8Vo2`%JbUh z9jfe_*S6Eje-d8cyItyiX=UK|B_;1L?UVG9n?6x~K;xR|0vZ5x!At8OJYq-&B}jT5 z#x}{P70vb-p^szS5EvI&o&q#3;_jrm%4X&6S8u*@Sv#ZVm@V<@Hf3s4l;7vm>@w-r|)yZS%w?(I1*QeIrsG=I+5nepzsGxrc~ z!pSc|SCA)uB~*o*q}1leH+COyX<6)cl^Ly@AOH2^A6)<8mq0BH{PW9E7WVFW74(6f z)`kEd2^SPxr15s^#3*QkxXWqEyk{wqj1GtNbEQ|(J1tK6 zUnIYs&2$CihuMv=&x^lu`v>+G339PrtlYp%HorK*>MU~Tjmr477+hGhviLYl@>d-K zU!uTPY~kv}%w^h&xW}uU?TFq&;?(Rl#6glkWN>Gw4B#URl`pWSWHsaPj-^{T?+Rl%;){@`StD{A2dwJ|V96v& z$16bph~Zles|b2KXKVo$Gy2J6qqP8xDY~bRh4}rn$()b-mt@e#Fwd)MdNQq8Y*-I^ zKqOSY68uyOQhX&e!epDI){mhNNM=IwXQLY2+&brLfPWf!2x1u(hS5ey?BxMlyyvL* z=no!g*pcWU2>q^rYg;4Lqki3-zG)X;d+6E=r*#^~7*m$_EGg_eQ=4jA+oZ8YMYWd6 zb?&a!UGBQcmfE7Cu~J)W?WPsCJoTfeZdoCs5nPtKdb}+(w{hma1+}#c_RZX|z*J-U z`YpG79lHe^?%Xkc?nU**&Cy^m+F0WA*VWfFHrCYF`F$mgbgj9#{-U|#cig$|;T=<^ z?0A^d|2~dA8{jc0T&>LodGPkA2Ce<%xn1wIlX?a%!@Eq4Md6Y$Pjh8C)#tL9&B{-Z zDl*AaMfM==qY6ZMs*j2-_o&#DtOvEgKO^o#a!G8V!FLJa99SgR=R+3-1WD>6kPt4T zQEnn&KOhDe*4&&kDJBfJWl@4anq%Se(e27Iv}pbO#r>3wvWJpUt}zNZYx9klkhS?P zCbrI418eh@4+uTT5z<4YR!}Wu!0bb{)|g-CHs~wgPLx_;gZ}Pe*r4aOmyr#+pp0lb zHFY6iYKHu9A$fn1?OWE+XV41w8uJSK1!e3*OLwh>v1U`ou!Z{BA27G z@n6d|J;N3qwe4uQiV3KTDcpf57p!m?0p3so1Ax@X#2IiaA}2>9&SUXL^1&>Xh8#Oo zQ?C?L-8M|oiJLpU6Q{%GGh;&0K{owhQSY%3!h1qcSn>U|R_L;f`cCNUO-efJ#sSbh zkg5Hb9y)Ys=YeAvt+X|EzTjRz37BGClh(UmXfNBmxvV{Ttan9870vRhk`;uSF?`m! 
[GIT binary patch data elided]
+Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016
+ By ,,,
+Copyright Dave Gandy 2016. All rights reserved.
+[SVG font markup elided]
diff --git a/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.ttf b/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.ttf
new file mode 100755
index 0000000000000000000000000000000000000000..35acda2fa1196aad98c2adf4378a7611dd713aa3
GIT binary patch
literal 165548
[GIT binary patch data elided]
zVK$59QoQ{50z>REr`aUTlM(s=hgAsum~KePrdLx~Ny(-!FvJ~G-=7XqIVNI9;pqII z$6`h} zUU)nZq6Cr^WSIYowj~UDC{{Lwnfvzd-?yE;CcnZ0a`CA(tXe+0Mt6$8THSy5Gk<^P z?*8iW0Q+#?e&O={`%X5q*H{4mUmH89JGBO)3O_&wHUI?r!jI1{DLMbgtO5wHLJg~P zGaEJlV5LoKmoBp`3*P!%#3>-bN!W00}QqoFh(U5 z_I3)fCvSpLkO+H)?~@-H`}}!1@Vqe~6-Nv>$hb*}RUVB()kzcIXv>RX!ILKas?#Y8)jb>rWA^~=6v($U zWv7;bzCwQyw=J5D9yuaR>)f;J%XMt|KlfcEXDhZ1Mq5|NV~=fprP4LWRr$)+$KUT=ltlgu{Ty{aMm#cPR0)3*R$@YWTsR5O zIA6&3uq7mxJGM^9vKoEz&eva;clwN0t5JN%h%MXW@_N4KSGXKsT6H43YU$D{@tvxr ze8cFd?$owzGFd;+so|5iQjSx)d+x!UG@i&t8RFUl2M)N;WFt$Gv>s#A2-r`dRf$Bi z>AxOF>X6ofSS6jCQVeH>63_Bk5f4s)J_ddop~SgAl^4$0uxL_c;p{9-qi0y?N@4$dG>VPyZ;IP+7B1L zH0+AXb|$CfMJ`#pILf$q_uUtd_-ge+T1HGIX8whfFFttPFP~?DOJ@u`aOZFC{&3Uc z#a=jNOyaR{(}54sc%S$VvZg_HCpz$Th0GxOa8#?DCEGdhE2#WZ5~D0D1?v+*oGL@y z5~4St@wFK#p0gJL8!tbqFgW?1{-==hxP0QN{{E++Ft;7OwL)25*Re+~}0H_}6{CX*0oRXs#@+*Y&tIGCWw(8|;cD7%( z`BrA!|Gm`Zm6GqX`1)k_`wVMT-pgz#XJ2RMzOIw+u3x!l?^F9u>>b`S`DOn1hN7`w zU@^4~_>H@!av%5N}n6I9m zvS)bjSNp!dZ_o1HYhK1z(VlUf-X{s&m6#W&542T6n!zXlB-zx%Zsmv@<^mME79>ML zJ3cXrLWL~$buQ;TKC1C5o*G0`w)>7%&%^hp`% zPFq|?O75ft_f)HXp&{OU^dVM<;wBa=KYGqq1O1V8N|07y+)a?xn6F!hKB9F>;pTuu zgG6>AWXypxT=3$F|H{5PfuwtsIfqT6p!g_fblgBT7%}xo@&{5J>HaLZjs@h9%YqV%e4vbA=;aBYfUvbgnw@=pZFuUNz%ud1nDwW_*iEIp78 zsneHMX_ zOssGM6bn=xAm$numq;aA5H6YM&=B$gPUVSqYj_0A35IkspBaRNOlh)^@*l)_*+1`L z!t%(vaBx-6*t5)Kf5+~Ue^q9Vmj4#xvhjRVG@E003zJT~Ab(+ZyY0;SBD;<`5~t*q z`YYmL8HL&7%l&ydRY_6&al}`hiH{qPhcZr+qvu&HZRLV_`A)#~k&iZ*wwh>!m-}4xID_ zG^|!*hXR=*3CtZ5mh)o)CdLgc0m4fdEPG&&LCBw^P{FgO_mH~-?9zsr#KP#mvO2hc zvxrHAjG%kK*wcGJjUx&SASDKl6_f~UxKWN0g>ATjcg2IUFv4DDhIegjnoVz(j4U&g z86~scmKM9#o8d5-jErZ*FY~#vuc(+mH7P|el=%H6I9dNlEq>- zCKQOK&1)^5DOO{2RMC>MI;)}kUHOZ5ySHYo%3v(oXq_V50rfescC*N3;p{hNyS_($ z<_6j1L5esaFF)`iMXdS*)BRx;MfGCI`>FhUYz4v5ql z6V~H?*!H|}6V`n|7DZcb6R+jmIa+B5D*-w%hIi}vUr*BND`6?@Q1GX~hzUw=5E#tG_8d-|q?Y7r{^tJ9yvIzVGg7UAc>DpVJI{$37J zKpTy)c84=_2JI+igw)j%EJDmdjF=*-sZBi{Y5Ne1L-ndKJ{HihqBxqi+G{X96iGlL z|G{@8Be)RJB-ucc0UeJ}_x-rqMQFffI}}py(;M-K+BG>`$TJwnFg_$_(V_dU zLeDGQZ8H51d)NtVcac%BMhudDsp>4h$Wvc*%4@ zB_<3{JjklBxfQ`oWI|$avv5WXcfRUy;5Gb@BO}I239C$V8ZsbNLdEKfQiTN%)(V`vnnc%4~>T=X>a7EQFGF(W|S5SHevO_?5Ko{=$M%3jD)D{ zgRAvU=plb*cVtH$vDiI7+ZVNeOUnF!A*G?{ysNXPic)d*;@O3vp^l7r;epdB;?oO~ z;?y*vF{5l^s_1`H6|*O@bgGM2bJ)b59V$;XrevjsF4pc`iDl90@lh#JtZh-o>?o5d zYIeq=HqH|^8`4>|x5T!IS#D%eZE=RGdGV8`EsjD9(N1%LIS@VjeEBG)kpFh0{8^hP zJw;8yiZf29$oLm!1Gf?ltM2PuuqZx{B-E7iYs@JhQQXAA2mQw3r&xPZW+JwBFm*)p zlny~C5zSLD`3o7iGvs22^zN_>I^cC4q*_4q(FB3rQ`|0j?2=CMIf5W2Km3toWM!vi zlzI=WCm25bfy1AalAaOtuDWsT+2dnRS<|d{TCMtOTt1GUUVG81S8Zwhs0QwPHSlL2 zl6yOPQ0GZmbFeV0cu8}`dWEfdIH$JCpPo~+ymb<0&)DTuEJ{tY>h-wVK8~Ayeb=g2 z!F@Wz4|c=GODFXP0G$2^7||CBNkB(Kevkr?=O9%lQ26Ma(f}5Hq)bnvvkt6}G@~@5 zCpaQkML$Sj9Q}2!bu^*H27(Y&q1#d!Y^YE4CPuN}&a=hXR_)?K$rrKtYxmE(`Pw)p zdhD|ca$}N`J%-q6Dd`n)9m^K(T@j;qNrGi#Z}EI4NT$cmQqCJos0+Lpu)rd9YxVMb z{q|J3!hW7)oXb7OYd+RTUGx2>y@&KXZBekLD7MHKhskO1B-JlWTi&yNZ=+|0$Eu$k z%}m^J@+>tyP^pl4lir0r`Z&<3I4dJT5Q855Kx$qdKm#EG;>&`pqBlw}67LtCL#LKr zP^n6%fyx4~<*FiG1V-UfAAC0&yp#+mgZ~~%Q{JqsuAZojX+>h9)otd^YNv~T;V|kw zjnyf4Jm%1wlZ@WA+aFxF>u}bxu>V$;T3G1A0dHd{&m$Qi&%i$XYT9{E^}!V4#yOG@ zxn-#*#kEy@H8v^5;jNVaaasPNc}0*Xu$t$x(A-sHcNlC;aGKT_T^V~)Ry}at+B+@{ zjds-~GH+I3hCelX>Y9z~a!p)de>>iD{Mjp9Ci%J+`P&&nMU~C)1Hcf&Ir}!q*G++s zxLxQS5{1Pd?SfIV21sPH1yE61Ks!KUYfG?yMm_;z`P__1pOuD?$VxJ=s`*pE`x!CslJ5wr>oJ+y}lyT%s!BB_805*;dH&79sLC)5WEie6Y2K2gqSDZl`=kM z0*kfyQf4Jw$@R<^E!^f19mUqN^*m>9sQUf1+|tZH#@W+S=f*-K_N$nf%=FprKVRyI zNz0rU^-RQ=91A7V@|>)4p(%P_cE#O=ljT-lo>=ZH&xX9AZ*opnkX1|7Iq3zH*P5qh zW)$#snXJ%ufpGPsoaB|xGLx<#c9?O}`6n}NPQ^}BrYr$x(!G2%> 
zr!KVMK$Rp|rN>f;J5Bo(?6!P5qU|vT%3c)Pch0badE&A0SC%xadgP)DLtKPqj?|r8 z?o4ln3%Y;A8_*G&Kvo5>0)u2`c_B+7F1@WH1_DY3yFQvf#;ko&!`5i?`K#NYoc!vw zZuhEF-$IndWj?=Jt~XTX2><-lWSdk0{(V+nEIZ#~zf4?zEI*C=4Br)kB`oTJhvkp! zW~`O_65UI;CT1r-cp*$5nG6r}itnyY&N8{3ZmY-W6;2F3Z*!TeoxgF(pZq>$PRf

|iJ)rNwdGr)EOmirSOj@aI>%6ZNkal&y#akd%Z!h9PH=pX zunSE4#rHx6xEAD*#{#Db`j(nTHb$rq( z`SIDCw`IE4UK1Cdl({%QKiRpYvTI-Ol)2E3n83%6*X4lQTMw!im@x|=F;1LfZo~Bi zz8NanVFA(DOnN3USPvw4gNFtrRu0qgkpyHaDRvGISd351$@kpw`x|c>3KfXn$u&2; z`YH>)`XD!_1eR6A#F*dni;b15*+r!}i>5Wk&f1YAUQr*cES(1_$e9xt2lm;#X>q1N z^~f!^j11l7%FB=Wh5XVRZ?du2qN$s&8EW$xAD=en{wJ`EcLpk)nsQzwbcYS z`Gd1Uxu1V+O&I5g%~#~+ly9P;rmZu+8N?k8GcAjx>r1RXidKDjVTGVLT0Jn;=%&b4 z;Rg2DM0S{X%2U^#WXLMY%5+<^EuvA1%GkN&g*j1>MX_d^W76@)P`%T0883Go2a({ALKF?KFD>=KXUSYGYYJ3Q7Tk1Ni}n_TnL=PkP}eZH%SJ7V22 zNmh?T@7kRtc?vyJuFI61o{T@EJ6rOw6X){5n9c#d;0Ek*S7H2tlnGpED3z&Cv;vSa zF%Afdu{fd=#`T$~KS;8SP>%}g=rPh(qP!r9DH^uY8h5@~kzlghqids+!c%8YwPtRg zpBPMh53UQm?!}(WIA2w`YGpXMVoJCwB|bBDQB<7UXm}4v=IzL^PMtF~nB=H+N83#a z)$d57Y|nX>TZ*nWBxEG|@?BYpj>LtRrdlofq=r;Wd8SR0(sQyC60&pBCCQOlX-REJ z(p#*)-3yQ~%bk~!kQr~dvUqFdWm_=^&YauN$6lVGU&EvSYZy4!f`Oz{;h+$3V9B;B zaIj;o02H~N=!ESD}J8h-5^cocoYSL{%o5NvbyP58+$p9d*FRvk~X$=Ub z2Ipk}2>f&XbGS231p}FPi6cOn+?AjyX?&<~CXM`ez-!(c^n%-K7h6Hs)HHe)q>mS?`Y}S4F6yJZNv{ z{?h5q!P@gT)#`PHs~cwK7U`ouDNLH`&)28CXumgfp)=WFNSN)*w59lQ;%<@eNHWB( z;4HB)EeiZSeHrV6mm!lQtzc&11LE9u=UrX1aMP?*^-M*vpV|PLc`fWelWZH9{J`%M zerZ`{23RdQ^CPZ4aQlQG&?DU6o%IWH$X3#vA(W62?Na2jp^HF=uF6HqmHu?hmG#yG z`BM*eOqoC5?w{kg&zn`-ad1+}gKuTIj(s9YpMF3I3a1?EsGAAop5<3l9GX)2z?+#d zNRfO{{>!0F?;Kpc`rtd84l&!onPdH9{rnpK!?DR@lcgVy>BxTpA1z3+&zo7_acD}> zgKuYgKKfj*|Ma*k`|StwY7TWyn=#*>3&|$?{F!x~hbaXr|C3(-$p^0Nw;n8-a=5c< z{yck1;SuJ5q2+fsZ+e$3HamFo7?&?%+qlfOefbl1lTgOs9qiBK}bP zSV!N%Eo;293od`*1>x8KkdwXXWuZBXda7=zaJ%IXKYCJFdh$1!Mt*y1V_f6{$v@*z z-^sD2{Vr+7ijV`Y20{@JRSICq&Z6Yl^wHK%S;Vm{VXvZ4>(mBX$~nkA!t_dmJi_9%^0c(_i*qJt=OiWP z+?zc)Cnq^6=Q}yLPaeN9>tgwx`_Fsx>V+|#7jI6UQl9K9!>`YmT%K5B8@Tw&8Bxhi z;p54R9^BjCYLgqPTdJqFP30rAztuAL>ayZh?V%MJ5PlVBFJa!g$(8b_tHeopS^;G! zq^Nvl&&D<3;D%|wtQE757RN>x)b!L&^0>U*EtunDoy)$wG(BO`vPBh=)dq0!I}c{Z zr5BW~6n|e?R8(2?)#AbAyu9SWkZxNYBoUo{l-2Ltox2TJG9myfNxy{BQ);oi>mE`510-d+FPV88sw+UkSx zY%s4{&0kks-^g4k>kNfQ2g^GvF1zW%#X%hGK+&Mk@9w`utges@Qk28R^sz9avHSDn zlE#U9_&CUpkd#0$3$77pXRdG+A+HS>aAHI;VM6I}830cLF{KlU3}L@sKJW|c1&ytj zU*5WAa%a!}Bgc*%x$P%xMQ?8({;}wDNC>_uHRX~yE3SI}s!5SHlCOAu6Q%288_%T< z&>TfyjLy=t@Bnotz!;F60oD&mrd&BL(<{=?pc4Rg1Y{n)uH-wn&Xhk~a_cKcrp_6C zWOUBdr>}2qwLce}yWFzd9q)&}>f^=s;G|;tJJRyFf%;XWqpRu%;_CAqJSUoyvllx1 zUH}AA53Fm5s9PM$y8v{hG1t?dc1>}O1U%O@ z`h1N(y~$h=A4o6sT(IawV+E^xz*Cty$FjQi(2bJMnqZGHvYerTc|{fdQL{pBABPLm z`V_+@>((5s?YLt_#m^EG@^ayI-(yx(4*81yDu%FC@$8S$Z%8YhNJ zp`~;R4$V~dPG`0O5dH>X04mvw4)m}Lj1BP$Kwj7dAV=`I{a_A|5QCH~2C4)D)EmBn z%7evN71PkL^|n5#skpJSF|bBy8&r!3Er2im7X|g ziAS7ZSqK+sje&V{XU$zuyigcCSx8FM!s`x`p)9I0v}Q}AI3qPPGp#{t+_ENA8C7O5 zjotZ!DaJTU5QW~gK%lp&GlZSPC@W}*Gfw$|adKLL$5Z5+O6vvj-PCU_fxmO?zyV75 z8XTSrd1O{!wPc}r1WXntL63%)Wq{-1io(Zc7E&ro4K!}h1ZXDk*sy~@e<2g~7_2r) z&t@3~bKV^nidnhyXJs;$Icr|NU)p>}78;vrOt7qdLz;_UBRLp!(2j`r}o`(yqxwEOv*>ejs@{S*0p2Pb~@x^Hu zH48pp!0Qd9rig1UN>=(tG|jw4tV&5sOQ{l{&o>HVe&NWX@>##-waMw}$+i6U!zBT$ z;p9594|3nhbxNlnDfbVuW+^$nBsR7rJvrmvM-~#e;M_O{Jh?vtuZ+tb#p{w`2gr}T zXh63STn#UnT$x!C^9ork6B>4Sb`wJ$FeC|?tPIxED7q{QNAi%vD0A>E16flmB8hfr zD)>WLegPte{;ct9Sthtuo*0*+=pExF8yjV$%Sxs;Xd{cvY}QL@?|@MdZGj5yrymyo z4MgM=JJ>Q;H1Q7DE||B(Fg6u#apjN2cE@k|*avLHC9e=}a3AMa0Ho1%B?H(n@7TO|ErL3%|m{Y~T!xA+4+ zd+Sec%BAoA?QOR6O*Z|fW5?fOFvE6B<7e}k!z2V7^!(6^>}U6#c<2wee$F>M%O1bw zGKiT=^{mMt6|@=I>tls>ga$z-7bssm@rlIo6pf7EF({ zRm^N|<~R0ScU@2Sb=S%BkJ_V;QFaO0p(3RSeUEBa?L0yGMiV67R^ZeRI|1d44$B%a zmPiy9Ed-#WCc*z)pbEB)=qu0q7VWFFq!Yh9=3JS2QB*&zxNv5X&uN%nJ9e~oKC}iF zgd{^CrXVTDpOaJ&6W|ZIZ0l$ijbG2|1)J*>^ng!P(|ZxKSvVh`+Ko?^A4{7ubH$vT zx{i*z;#KSC2E`PM*MxswO9~S)?G-o8>UCnTP+^1?NR=2@%})+=u1CQyPX$d<1Kq+A 
z%vs`_k3#@g0Dx=aWuOH7=&5nj+~KJI;aOdBkq8SjGNqmgjW4?p6wyWJG*;+~6Y_I& zbMq65^%add(X*g29bUBK`#W}gUrd`QN+07Gd(jaSu_U1x;E<0H zEa(9dY{_VMYlWETaGOkSN1|BK+C932Po=_l$iJ;7aH9*0Mwu}Vx-iR`*m(q*>n6aY z3Z+oO14HrD=-2vh2YOHi5-^!cm8Gr>YIa=PT`1%{fNk6!M@R#{fA#FbPKml)6~P20 z1`0*f8q`8xKe-Wgv%<12JnQQnyXU{?Qb5p`3iPpcN(X5cJ;>$v=-S#Z(JNZ_zB#(& zYdy@KRJwO;-RX|}^mOn3?R4D907142$qzqz zTB}j9g!`i#Uv|z~v}l&|IamZg&|n@y+5C0C-@AF;Dly%K3Yn4d|@i} zw0S@>)vg&21d}bg6rRfie$4_Ve@V5ydj;9v-77!*8A=y>_n#4K++X|ocGk1~^SiVL z>vbec`N;R6hI!SMe`d3l>?fwb{MAjWtflFCm> zqdjdEvu9U88A1W&6Gxw%8{gnN#=VHsa?*bB4?V>_AimbaQ4Kn53gAksICqyTN5su zJD1&}$mz((kWj;@r>z00&nlWd6UqA4QPPQ1{onQD=~bGSDuBTM6;91O2d7F3(W2s9 zLYn8|T-Uz|(uGlC$j(HT1b)7sgrKj;IXEZj>WT+fM&LD1J_OR4Ls*l*q z(0*St?x?Cn66Xlq2=RBXfAIcmuf0F3!jl#b&CDrGE$O=Fk~`|^*v=7bS7u(Zditi- zwW-ZL2jmZbwQJY=ENTCiKfZAN(wlb|t*M++%RhlqRfYV#{G9wl`NvUtlN<7qoXx9x zBKzeX35|WLYW%Zc^=lYDzVEu5<-IgK1gx>U`KST(A29 z7zKa>5}U&3kmea3T`C7PP8?q(!vL&C%aPcrM^Mg1kzT=ZU_koGHY{==3Tvr$@}meu z(76{7H1?;&I71DJEHUJbY5U7kF&c?($w^%6EDR3)04!Cc>mjVaVxT%7K77Y zh?pqBk>{-y%(hC8Bnm!1{Hf0!vV!feb#LkwVyxaMx5<@y*LL}%dvho98^~G} zG!Mgm12%DxTp%-y23ElgP>F!e<8u@r#M`blW%*7XNs4jC{))30i@_o{144R^Rr8*2 z&`0p*=TzY~ufG2^DI z;q(2Q)BlV7uRm}~M}+kHr>C!dWnn&ErK*Cu zE0x>r%5_Y=!9E*3GS~n^U_5eSLiybZxnwPulF6?oQ?HO%i>G#=8S&=)RljeYeqj9x z@a&1IUpOl(sV3iSmhVvVt^C?Gs8pfKH-G)@yI)IBZS@Byro?W5#*eMGzbgOS`0-~wIj{%qH??L=S2NXR ztHxf1SHsRpw0yA>v zFz!3P#c0_0114N`D=T_$``GdAPi)`*1iPhsjS;ks*I=%!9eIAkj-xhnU5(igD{-f> zshbOzynpf4|Gb7RU)uk6%gU84Z}%;`lj%N}&tEE7O~uhZ@RAp>z+(@yf;-KIp8I}x z!DI5P^955(tf|OqvWk_zW+iuA#iVDpn#>zsli$mvI=7$FZGCgP-e?YHo6X_93;UmF zwmN>eWA&Yr&E}k-$*7<8?giVAU#2(g{Ie=s13AS}aA?3%B=_Db)9(y}j{!}bz<8*~ zJ?g%B6!NI+Chq$f<~O#PjBK3i&fUL_9~G&2j~%7mH(fB+3jam%K`7{~!1cNu7L~(+ zy=h;dw&bj>vBtMm9KnNrBUkX)?+a+$*pYEY0AHsXIp-+-6y9(hF$h$CqJVmdLqK&a zaz)CwldWB7-owEOwgIH1fMZBlS);Sa6aa|k1qDt}&g~oVTYJssk3Tk>_X4fr9*@9T z&wOZNx4r$Zl4;pQ*Tg=hzCoX2Y{;`c@qPYdySUmWO6x80W2*PAyVU04t~7VT^GVy+ zhnU@kPx*$lr}N4$i@LL5fcjI#@d_-FBkZq{^@S`jHYmR$t@{QVp0)EJjtpP>CVHKC zwK@aG`T{8vN%%r}=W%B$ z(_Hb|gBcG?AUFkN5Y~VkE(GrtKO*q7;wN+fJOUo29}*gAigXo;osss59xv!U`MCtT z0Y-7tL3UXoH<G9z{;ZqrR6sUVoNd1cHI&I+7p&q;$?!N3uAwtrmOGDX%no4MwBE zYcw26x2D_tR;zm3LQw{z$I14jT^sfninHcc`?<&9(%S_|Fgz!CeQEma<*PGWbp4^j|Y{)20DOhSxob0p(vRs8Wo6THMV&gai%S?{*q({Z?zGt@82bgi}jd`<0OI%h}?mLwImJ5vIN5RxqA_FrH zs@2572~8G=#8x69z5(NV=>~rmtP)1KN?i~;E|k*J)1YM>DD}XM1K28x)-O3(Ze>l-?J=9$=Cy(7F3C?I= zOiomcQC#KDxT_pC^QMT7w4}n6kv>CmQNZ``#3MQW;Ul8Q=rkAw7UD+1DS2AAFt5=8 zA(0!o*B50lJByg6e69S~^~sLO zw|{F_PIhXxNfa*p$t_zOL`Qkrd0#$!O=hMi9nQo;ugPP(9?98#=>=I?S8aao(^>ZT zhF`y0oHk=sMkaa7nFW=1eN=iTkVoP4?m&{jrHbrYIKMKwrruJ`EsJt?C59YnzC*C! zQE}jx$A82GV{%*XJUltl`DgiwiySp_^I88y9q~t86c=iP4J! 
zOUleNTViVGPR`iymr8w3ZGBv<)8vY4j&06#i|cM)Q)97u{jKbLX4*CPHTjQ2sg`&c zEnW%xe1QwPR>j9#8~m4DwLLeN$2j6+6B4ZEl*vZl{wrR(WvDeV%`t1Tf8LPXfbq*b zW!1kU{S_xw#h^f!DHf-&ED-(&wMYUV2B-?j z6~eSPWM;Y7&#Oer#)Pmg3sa{oS+olnaA``?^re-%BGFb@dQ7QI$e5a!8S92~PqrcW z%%9*w@2k%r?vR+n>=#QrVX2g@V=IT<{4WbG{r+p;zjT3mV*@q6gZa~+$nVMWBaO)= z(wr-w`rxy_AAe~0qngDl_DX%?Ehd@uOH~qD* zwHg;Z@OSyv7j9++e|`O1ksR-mTZaNy$`}2WEw7hQ^6Gt0{p{86?_I%@+xEVSsR4Ns z&@>7TC3|*7(9tHD?tbWIUj@DF`(gVBa;IdW66dL8xw72&(=`%gnh zzCs1%*%DQD!bmw$!sq|PoyLagim<*d!1{JI(VBo(P%#kG@j!@A$c(}>yt)?AcAAc2 z@J=zY5+y+c4O{4OQ9sO*D%dbC07Zs_2{OW>#H3(>#ID;VMJbP904q|7Nu-?yyrbMn~K9OnSo4Fk@c z)L8C(P5yJcZF;~~_JlV8LqFap?nsI^<-%FC;u!KJ(Ug!T#wSog@j;JP4s(1%Im~fR zISKJ%T7pTGUs8NphLdtl@$8n=Zd<7rjaq-iUuw=|`8UZgd>Wmb;xa~$zD2TtZ;eJ9 zT`9TIpR$UZaXdqZN7Igq5s^!a3Kj~lCj;(!JkeM~M1#cqv_}Ts%8;Hh zH12(EWcaYY~)7fzL!mxZ`r)XYE+ zt0PLtbgAx?I7Pm7M1JY^N97k^h`WTX8fIm;KgP;mi1REbqDk8un00no0QaC}BysLa zx3F|qR+-lT;-vs4*|IY6gBc`0&i*HwK019KPci|*!?%>)e^1Fn^I|@ak*BfZi{;nY zyPtP_#j9P|C%d zIzDS(x!~yqYn5Ecf2Jh9=^Lm*>{(AS!%FC^F4wi_dSGSZB6y*CRQIgzW!*cvk942n z8zGA2hoCFA71%OBmJ$;}uWT`($E@x(gc!ZDg-~`0;6^B1i7*L+hrI!1y{AYTqa2d@@6zTCo1Q!H`o@u428IC!p?{x+;^E?Y0l5?UBS4;X7dxD;~Fnwu*TU^wrhboN7w;8N~lBoLGfs-|Qr^6m6 z2+l;l%xXx>v088$i^-UZMLaqhS4nhP%WM4Bgv6RlriFS|_PQ@RG{wp~{yIG%EZUUo zugVZZ>+5|x4?i${#-&@97wLlyF}@Rnc9YvxVpFd7iqUC_a7yKjN)&H{44Es<7~^)Q zj`cVli3wAjPDi+ket?a>MUOv_72z=D&!M?0i14E< znc=Akr;1+YFkp|BV2duyO}yg#tJ$WZ$8Pq0S2##myV-&$Vlc3FA#2Kmc5Q-#L0 z5dz+Ga;S1VUEFbVF#@!6v5 zh!ce$wCeIJWPazJe&>?M~T7=80Km%%z<$p*1`g0SAVL7MV*HckBHJs zx(s}m8rCDeNedfv-)7sjuu&Jww`gIL&drZ#VT&%8Kcj{1y2*k7-b6p-jkmzhX%}o^ zbi&7&51O0JIJbx(G##NnXf$m>H~1emZ8;TqtN9^B958d9Djx*_BnRC2c=rLL}j zV9Q`vN9VAwzIkKBH@&&9ZHq5ZToNwy)%5iElvhK(!N^c#aATwm85+=@KD43+_=!sE z2Spn}bbsG)&8Emue=i;uBBlfKE3@Y{^Evd%Nyq}q^SR(#-++v4WW;ybv|7X-&TfSF~Z~hqFWjn z9O~-t^92jb3X7GG{Lcz+#D_%iDb#h;r4bw)Q78J)4gJcsQ+e}ELq&O7k#4+U?Z~0# zRP)d?btjcIh&tMkzE|nCZp1Ysmg2jxAdDb1UP>Qw(Nil@5796-_C%V8A{eLk$e?ey z-#6SD@tqmkp-Ag6eRz96UgAwV2Fo`**xVNBZ656QH4hIDcD0NsN&5PSyILbd+CUGY z76PVohI(+=cY3V92^Mu{U`eNd>@YyM5+r&NdQSb`=CjHyRK85tIXpZ7y&h^_vkFUv zUH$(}2}KwwwO9I-(JDgbZz{8>2Orrt6v2Ci#-ZE4`p2Kc8wN^9z$xJ#-EN#QU9GzY zwu1KRu406);cgXD1+m@36aLx@U1YH&13UfBU`{0vPIbGEn!R9GPWFkVOFwLY&BcM z*0Lt-|C(6~@Y!cN8*624EW+AZ2kT^AY(47+^Q{;9l>KagZGa7wAvO$?up8MXcq8A! 
zwzBiEF}?ueliS!RyNF%PwzEs%c5o-#1xb?2pt`z;UCypxSF)?v)$AI!mtD*DvHk1- z`xcC{UC(Y{H^N8IL0ITM%#N^|*|*s(>{fOgyPe$uPgi%byV*VLUUnb*4!fUymp#B9 zWDl{2+4tBZ>{0d@+^s&ro@C!=PqC-j57<#y<9wDq$9~9u#GYp_uou~n*-Pvv@Id`C zdxgCUBf39hud|=CH`tr(E%r8hhy8-R%id$ZWWQqXvtP4g>;rb3eaJpyzkxN?-@$Xy z$LtU6kL*wE6ZR?ljD61j%)VfMVSix4=7)jl*ytck(D6&0XBhW4MQVc`T3P@jQVi@+1y^3#>Y)@-&{#GdL_q z@GPFqb9gS#c`5L~KH}Q46nYZv( z-o_)m9ZCR% zG2hNF;XC+FzKdVVFXOxU9)3B$f?vt6;#WgcbuYh`@8kRV0sbw19lsuQ|Bd`6evlvH zhxrkHGygWfh2P3=F#jHZgg?q3=tm{3-r4{{cVBpW)B)=lBo#kNETa1^y!cF@K5wg#VPk%wOTJ^4Iv!`0M=V{0;sl ze~Z7(-{HUD@ACKfFZr+d`~27Z82^AD=O6Nq_;2`c`S1Ae`N#YZ{Ez%k{1g5u|BQdm z|IEMOf8l@Sf8&4W|KR`RU-GZ`34W48H>a)ewVPskSv z1n}a7VxdF`2&F<07AV6)nNTiN2$jMlVX`nqs1l|M)k2L>E7S?~!Ze{lm@do^W(u=} z*}@!Qt}suSFEk1ZgoVN)VX?48SSlMn~gl3^dXcgLoh|n%{ z2%SQguwLjEdW2q~Pv{p0gbl)=FeD5MBf>^uldxIXB5W1T6V4YdfD*|zVN|$CxLDXO zTq5icb_%a^VW$O5rNuYT+7TuW+rfPuMRU5WXc`CtNSwAlxY2BpehD z35SIv!p*|Bg2=@!$6&}#-lRA2uhlZryk)f_u z{ZOQNu(i_|>Dw6T=^uzlop>G=hlZO6&2(vs^bQPf5l29^i0xfHy~g3rCQu+95kA~$ zpm5jFFz@fy4@P?XH%1Iw`}=#Fy84XDy?8^<5?BLfsCb@jFMZ?+8dG;e8Y?HX+DiJ;Db zNb|4(OEsvfP9rr%DX^!%wOefOY3?xNW7-Bf`}-n8=8gS5BfXI(w8x?asREN09vRSY z7;Notix^ta9k>g_%^f0sLt;yRf47k?w8BdRgI#^Y`qt*&$Y8Tb%PZdZwCTHso3RjD zh9jGYn>r&z1)7!crmnW(PBY$h^fmQF+J~)b5KHE8WYD5MD3qa14X+;=8t!V}BGR{5 zy87CXPR*xW!>{q|sHvXV|f@z>l%BMx zL8TQ&H9Rt4Rs#w|C|yKwgysx&ZH+XwkM#6dweV1Hb5D;mvbnXVxwrXrv&4?B_F)l( zV>{-^V8j^N0zkuPm?+TN(?1lkqQCmO`Z|=hOX$zOh_SV~C(_r}Jg6VUR-wPw(AwYI zi}BX?Hh1(zhRx&sH8OCzAE|u+_u);E$gmBcJ}^Ku?5h8&g&CfB0W8p zR_fMvbnI}%+=*dqQlVQ3(tI~4p^*WTa;FZ7Qh~GS3`9ns6{8g3I4f#o;OtCP3~+dV zOGLkE5Ocm$8g3ry9?}D&qR&h%gI$sKR%~L-1i9)wkvazZM+Sga`nn|mS5 z$Z!*VDdq_UF-g?`b*n`UDt(1{1I*qxBo6ft0@QF(vKf>RCeQfFMj(PULWMOE?d}J_ zbO8R_uq3tgV~i~tI8#dNIB3%Y;rL;|>o9hC14cmlAjZBK7!f$n4BXxcq&d>lVgz2m zICn(sN*625pry;IKB|yvpry2_x6OjQ!=3#@==_LrXrybHM$AY+MK$VMu~0=KSYi5s zm1(6^mJ|AfmXWR=%$5!#G7r$YV`}b2?ah6y5q)o@t-EX3(oRi6E$bs_dIal0r_%3Y zdvSXts;z$n1J#6f;!2$veO8PLe`iGj{?2-)Q8Ay%Z&8CvMxz=gjH;ARNeyk0p>8Z2 z`kv+ix+#D%Z0+rDq3=>=qg8`<1>VdXM*4@ z*#IiVra)PRWx~p085+Ti#PsbN09cQ-s39aPFSQPgY~4zI*A;1vU;(89iOR8`2@;{B zAL{Ii^t9Q>7aFxSQM5!g0lfl-M!JSN(W8Svb`e^5Hn+9`L20YDf&ml&IV(m5kh7u) zK~2o0AgIpa-ky-yIy6+O2W$dmnpLby9jRc^A*_xrzrj<OOZWXSXNDEchhc(j6pqt1Gw_b9G3NSBax3s%#S zmWaBvX%FIN46}(YO7!V8)R~4hzzv9MpmY#`n|t-`plQ1Yh32+CvAv|M z#NN_1+ycZ7Y^)9gFk#Q2Wmvf>QI4K|RCI=zvQ2m%8JPH%;L17Stvbawfz0jSG-SXu z9qjLFlQ1zxHlvwcEwr`_b#EEKqSik$IJ98|ivq|2fJ(o<9cZ~HBGQEx@ZqijVQ7Sg zHXJt4=B8_7L}(f5;2XQ8O_8paerz22@P`Ct0lV_;m<}rDrnq2?`T^r>aF0rY)2pz( ztsnG&vi;CHzpUK45u`Y%Ql(8uRbFgUS2iW0sh^?(bSb3^ja7MwE@8Tq(WRU&6^4<% zu7;ADV)S)$31TWJQ$;B~Ql<*ZR6&_4C{qPxs;Cf~g2hUX778Ipuo%?@i-T%uwJ0c9 zj7-5|WC|7|Q?Qsal@!y3-j-0N63SG9YJw%GCRjo_N+?GOI4p?)>g>sZ?&8yc6tS?auu2)h})>5rX_)S#0r9Q0P zsqi3`5u{p!RBMoG4Jt1vYf#HNjVcaN#UUy-M43XADMXnfL=X`ohzJoxgo-PqjS=8d1PLTUR91*UB19k&B9I6XNQ4L^ zLIe__5~?IXl>{gU0Yiv@Aw<9sB47v+FoXygLIeyU0)`L)Lx_MOM8FUtU#BTP9k=(tdha0PlBIdGvI7<7av2Mv0N z20es9$AxmxpoeJCLp10i8uSnidWZ%+M1vlpK@ZWOhiK44H0U83^biethz31GgC3$m z4`I-8p&Wz>LWBuIzy$4qvWPN20_EzA3Q$d98u~B|eOSW>fpT>^1*pC-0YI1lAWSGB zOt2KD@ekAZhiUx7H2z^4|1gbzn8rU$;~%E+57YREY5c=9{$U#bFpYnh#y?EsAExmS z)A)x2>a+~hXf3Q!=X{_hptiiGRJ*GaE>NR2wML!!ftoVyeYtiYFRw;>uGQ{!+Pz-8 zPgC!;TD`Sey|r4swOYNkTD`Sey|r4swOYNkTD`Sey|r4swOYNkTD`Sey|r4s8qy5Z zY4z4=_10?v$(?k d0mRO}xo^G_%I z2O^L=ATW7lM&^H<^*^2eAN0eSJq3(x4DA1L)&F4euaO6sK5joV1E+r+DAqq4sQ>Wu z0|aVj?P25hA?l{GgpFa`oP%>HM?@(=7t5y$lA|Hyyb+&}%lcF7Py zVOq>>oZbI%cmJ;c1Ox&!PmnY&6cmq2?4Nt?RBbj#@*S#u% z($dm;AKJG3Yv)w@yrS19dscW!&dp@T$utcaiktwRu?l%Fgn7##v*Q%&IaI$|O!P}5 zE!tXI-Ss#N&%~+2xwep6)=D=@bER^nrNZX=A{Jq3H3E=sm}xcLG|pUA-88}8wRPyv 
zPnoSTxscjcm{McuVx_s+*=h#*Xv3UB1T}&E{uxPi!CD1QZy{>6F_-GvT;_v+@h3%S z3~p6JKLUMaO+O0%W$iTHs4{|UN^?L;ts#@G+64bnV>gujTO1A$SfkJKhUN{&{#iBu zbrz-NBAI4CWjjIN*&fwVu4RubbB`IvgcJ!WV;{$}bpWy2K1lw(2Xe|eWcN9U#V^J= z0v&sgD$Y5Kh^J4utKJ8w`)YkScnEwZDG=2~oYvdtqau)|6HAhwqW$r>MKydMdi-xf z|IPEi=Mls`ySoS4Uu8Lk>GP(?uENKw#l^+NO;vrl>caNS*3!n4J~PMG6%1?`Lo`8D zP!I`IikK!Gm+D~0Tx5dT2;-4lEPJvvNz@Roxn4bK2&F(-3ukKoTzvdLw9r!ZsOd)GFakMtPqh`I$P>j#E63N~^t! z8t)N`OP-Ey8cNVPKsgcS6B*&w9LA&4rPERq64J$9K^)cnN)EQxZgj#nJKXDP(AwtHNPvj4d!y|3WE|h>aXutjp#eR1Va1(D~!1cD@#G$XK@| z8ScdxW>*_WC0A}fCWQ_Gk+039h^tbyU`-AaRQXE3C@|xuc#bIvB-u`7jVA9qExYjR z=L}OyA;5`@PuJUM+d|rr+H3CQORerU?U9!{Bot;XUqe}i%R=!=DIcZf5IBHt${UX7 z$u&nXerDE=@3Wd|0@Hz$q*rpVDJ+Wsi!-OJ!$UKaeXQAz3oz@z3unQS7l<)x)linz zAH493JdOfC{BNrjX7CVfZBLDtgiqO>03bm9Y%opN;dZI*d!CgC7s1So zx$n!T6vhxG4g7BozT_i+(EXciSh1 z*WKx5dLayUw$Hadz3+<5D}%BZCKe`cE4yNK&2O zC_2B@YGbYTJ=@>6O14_I7;gA)sBiMPW}zMqr`$mljy|@#K)X4 zywlOE7bt(D_<9aY(j=81rYh}wpQBZ2>BFX$_0y{XD7Q1jV-(PFSPU`4DYgBSjuXGW zB&TypZ4-Ia;ZDv{*YiZ4BK%bLvA^d#3^`kw)^(lO=^V#PS}I{JY8vD2<6?gDUgByH zoos%w5n5SA70~&_wmZ}=sE_CH+$5D%I~M^tEkJ<ZQI7BsvH)rso$j0Tno$9{71< z@V}SCAhApjLIvlX0Pxk%zZqkf%M1LSF2n#NI}?5xPC=! zobSQlu20xcw~DY&-wOel-n@?qJ&by)A02bP=f7VUb$6h9A&zxij{$poi1x&>usk&q z)o~Zd^jeapPeoI1Jmh>Rc-6+ws~2@GiSZz{hBgw^soz#me0J4++L57M=6^+@00R~q za2yth-1NjYw%qz!q2gOQL3>x?qI6L_n5iR9jUE#0ppndAXQSaxXgAAg+?Y2ZVSq`= z9KUjbab4|QH-zBoMtL>BP)ja&OJ4O?2yYF#*>9aH4X@u0(otsJ5@}kXX@!4~Fy4Wh zDN>w`7i{CSlIi9?H2YDBB_h~K`_cJqA-9`a@G}pVc;w6b)PGdJz9MqO5mS;`wb~72i`W#}dhh!aglheCet+(79kLz+P{)7XRuyhb{YxtDFZ#1N?6e^# zh*vvtce7F3I~yiY){1)rPtn#OV%8zxe}b9$IU5=66PVl01yCBSd^dXUKhK1G0R|IV zcvk_Ac>q2IN6uR13{;c-_cRbEqYJTB_{Fr4IijaDP_s&jXx0$`sG}^H^o5 zz-Q`#Xift$p?Wb<=fxuzXVyNKg#>QnXBe)ocjuyk{hgW=c?V zRs~?RkX9n-Kuh2ogdASyGctZ-79U~PP*d!u<<~CRR3B7LYtxF8T{?!Nye0d%0n1-I zI4RC68nKpBKg^rfqiJ-i4HXbQx4>=dyxjLao>lA4TIu938pOX`7jX~@WPeN@jr_P# z^lTrnNnS5FJgePCzFZ$yZEE2?4_z#R){UKOsw3qqM;Tb8H@A2_3MP!1!fsit%Vn(B za_2OfhiiPV49y_-YDhUHAURUHq=tlP%rx5l^&mD@G^8z-Y=Z-tIt3L`u!>WVQxz;^ z&9LZUjm7~;VIecrymMSz9sAiMQWB|u=tF>$?NZ<_+~80;Rt&KJZ1cdqEdhb%EWus! zdJaxE0R*U{g1~6{#~l&e3R1mY+6nb{2=-5{7mcd@paR4GV(zxv{CelE`s$Ei#`XXd z)c6s?t)+nM8@GOItmYqze$tkR-@pNBhUdU3!dN9ILMYJOj4^aUvZMFQFK=P@cL1r6 z@U=sJ<=N(Bq`QQC3-wJHuee;+1OIT=^WJf^vichJbLK-(8A>DTum-ya`_|C7PvY^V z-X#zAoguBv{!+QTW6rx3-!1S_UiFDt_}ti$D*F?fI@AHKaETKn;7R7C5HXlh^h{!o zsrxdvVOX}7A?4Tr{6o+@q_3pMQZTg)Ea1)Q8|O#l$}N5<%GqV~ZE>N)M!~x7JUKA5 z9t(l39F)9Tiu!T`O`2ZQdW$v?+Qe4m558`xNHnv~bX8j4G6ay*PnvTLCWgm@K+IP1 z^SI~_P^NN)(Qy;gv`8wrCM0r zdu^7~mAS%W$G8dDhB^z`1T=lN-^sNz%Wcwkz4|)K)IQg@u1iEb91XhJ5xEwYDfvM6 zkLOfT>Goml>)dkK7RrcGd}4t$1w4`Vi@x?8r-Xz-T@erhoTTvYj;62sm##V72KMKy z7jCvo37#eEob8=(e^%k-w*#CwiWcoBL~yaY-mZ;3#7$hwrE0n&Z&_iqW9;qZ8h>;~ zOjAz(rmb4$^7bp}HHOIkg&1oXJz&O9f5ETRc`KDiwH!c>87$jXR}9R=#e{N-{typMNosUZX^8aPu^3Zb=_A_|$kJ2>CKI25a~u?@$|xUD0E z3rV0H2Dkhmtcz}Bqr1R;PGC&s1*q_(cw=w!eh^JIxmYy6ip|~R@0t~6h9kSKF8k`r z-rmZ)soKb2jgHIODnmo-1=6%KLu=Va>yJSJgYnC@P2eB{+<2U~g=4b-hjNb|x!65z z5!Z3c@32#?=kl#m5f8>l8a@f=Wi6&X>j+N1+ruaQG?CtDV~PXb>@WWf2Q($z>z7U+ zMBlz(Z=2s-T8$d;Ue6M3l3xRuVhSxm5s{3BKIpgmi-?-oisza zkmgcLp`Vnlx?L~qe?(H=WYV)H)PPR{pA7{5h`m_l^X{d`q$MOR49YduCf{c>9PI^G zU)!twAe$_^TtGrD{jAw%Wfw1k)5`DgJXWP`-7XNQ20MryLW6t0#t42k2 z0hnOio5PA`bpihQ)A=v&;|;YU&l?F@fC_Npa}OspB^Vr!zTb{NLwi)Hy`}19z@fr? 
zU3Jh7xd)*wL=El;v+()ck_u(iI_w^muPd_R6?OAcCyxtX2(vAWE-tjbs3u$PJ&jfGp*j;7`8P+@e0HF88@NU#6t?jH*EMz0L$My9PHiB zRVebeoyHC8Wl&pm$IT(G**{Utw9Bh)HAE_^TCH*ta-8|<-fxJ&aV4hWUSV75)+$)r zdIu%X^B9`Hh`wv*IW6Ho^#zL)v08Di99QNKyQ4Ex^x@3G;Cg6K(hX}D-{D_(j!D%6g}xd;qA)E>mv@<*$ZX$rUpcaK+~5kxF2pAac=%N>3B`6+-EO>fzLHkzfcD>r`}fy+!N&}- zUH9`HP&unio@pV+24r=ON7xE68a7?3>8!kAzHyK4Lb=YbvQ+HBn+||W{Eg?GVcYQ!l ztSPK!t!;Un>i4P0$ET?I9pdIh^EU0+RcYthPqRm& zPB}LVBWJC5;`qzHr{VN*QZ9;5?qvVIY@^viP)2>OQxb+mdkWDzLq#%PR5z67y??M+ zSjDiw%%q&n3QENt>Lwj~Ps8*c{0xvFm@csrU=eyiH}Cpb=6h0&O92O%dTc0WV%R`6~bS z;QT3eZTz7V7f#K|S{Kj{_}e_u;Joz^)V0uvH!H@e3WnVKG*Y;R5RQx=UKb=?4!qeb z=_DKa-vz<$?}ZxrbHii^hC> zLN`k`gS9^kaeye-(%)p=Q!i(kFa)B=q#!VbG7-calS3zKZMl8Kg`I^HD#h_iN?($! z>66rNVaPiYq<@#JX$rYXkw1$h7(yVDzNky$V^i%H!;0ZYI+ZXhW#@zfK7#lXMnh2Y z^3kcr0*7W=&Ss!urbd>4di6HWv0K><1f+uu%DQIF7AJcpusQzmE==J_e z-fwZbee~KU31mUe(k?U$jD<>ni>OKvN0|-t=m-(#j;6O&G~<{8=r6^gv3$D&K-xY8 z-A~Ae;#6^CAZ`&J{>W;EQAqsZ`r@~1+yiz(zXcIDK*GBO!0caA&f@eEcUcd0SLAp% ziK^4%9xfj7AK-j%&m}#)l$Krz(B|KAu~u{JsH3mYsRF-@7#pkE z;OJGjbEEV%#{Qt8>G*G(Vfh9<)rQPk1eaSAEZCJ)F~PoR(h+g}tl-VX($ zYO0R@KF7}dH^^v=pHnQ9YSNiTJWm+f!v@BwqQ$Y$ei`a_1{_|I-ss`3Ry;b`bNIE$Rnb+z+c*ky}aexvI*zKtJjccvTTZIqk!Rw!$+NgN&BT7q-IM^YM>9lAFF3qsj z{Ui)Y_-SRrj^=N_HhESJD-ltQtL~Y=Od(%jfPRpq8P9`F;O6pc)s_oF{z{=|n6er5 z!u-{h;{bvm_L%5agg+m)4aA0YAb@K`Qv~YLWx~sGmt6*V!|?F z%7PdL2(eqp+SqbvQ;>6xmHK-4tnG6El;(blqDJ+}Q2=*wlRYGBr%&K>9+K^{Aa z9GQ#O*$%Ki>UYmph71RnuwA?#!9vfTIuG|p%N;AWWwB5C+IE2*>xGPGkT?t@?Dvhd zt%Wpg_71*1_@0kBba@@FZN^TvjpVY+rkq1h2gtm zJPXCjvMjf7K+`s#pH$0kv}>*SPOV2H-e;NChSuuNAtqhRtEe-DVqBG7vr*enVEmVd zAv-&^RqMyAthD#nN)(w!Yp^GI_VB1e$~skiRlP3K6DJObNVTJM{r0E+{x$grTNFbh z_uBsc88W7$jtTI-pPGD>}Uj((F_m&nMmhI4lhx z;SZUOC;SP$w;q=0ux8Ozq190iFGeAoD%-HBSfOO9W&PK~Tem;KeV~3gA0dW>Pv6I1 zYNn)N-+Qq-I+AJB!=V9uxeoR-tL7t;-ZGy%%>9l;tMtQJm7z}(vh)}z8v;!QqkT%c z`Pr;kXU{<7gZGe(<&Zjp1|1&SGt0&iI1JiBIdPElDo}oD(oS=FPy1_j?dy9UkEB(@ z9bfbpt~myqXy`*o?NPpA2S*3Iq3$t0QzT^=d^GlO7pmjpsXe^IwU{J-P?mtkdD4jT zbfg}pfa66t&>R@5s6DBCTElqWD~=VAB5A$Y$g3nSX4Ol}s9ozugn47sFrns|d)D7D8mh1^h>F8%3W z2a5TI9W)%RgrtE1+L(i!DwwV@xZ@VytBSnvu3ay?9Y$%KBd@=bFp#4X>B};lBl^>;B5%>LW8TFDeNLsW?@@;#fCxMm!*pX9lfHt)uuajgiV$d zT#h**{Ipyhjltvp#_fvwZ6(9T&)Rb;VTsa~=gJDe$;q~EJzFO3Apn2EXrlA~F^1;i;H_jG>WmV*SvFHky zf3twjY=>%B`6@dr95pk37;>@x#zI%UP>yJ?6%2RCAY-s(SLIof9c#sG+>FEDjD6gU zD+r3UOyZKt5Q%XW6oZUQHH@|K!@vgu>y(j~#NpH5x9l+GPE6*P91EzHBE}krNo7~5 zb|0;8aj<>dJDCakJW=LK#vk^V^`8D9UP$2lLk&K$X+Ag;(w#ZeR7?dFGzJkJMi;Oc zoicM8#T@0|)<b|u?YyW0!6Ew$>Y~pX2XU`J zDYoQ`d*fm7~YwxoZtL1W7$X*5n>+fi8oUqvJri& z6nm&FFcO9AAX=7k9_;yussklMDtxu6t5OkjY3tvL7s1PUqGstoYssPT_ItLMXX))Z zJ03DK>_IPJgIKX7x8Rw<+?!kIc9MEA5hw)}5-iqzE8VFOr%mr5VC50inCtJ#tAQL} z1%tXg16rH5cZ?pPJcaYO6~hh*gGh%x5*s)RLDozXG<$(Q=kn_7fh78e%R|8C^X%4F zm9*vMr4{4*^7ibRo5iK-C*+ed7*^J_i&Im+>V~x=%ybD)(9wLptciZLN_)YB5O^v@ z{$Ja{Qtd!!GiH0^v6Ue$NG8nsD)~)N*JjWChU+1?Ny%198}eb+iG#cLFl;OopkF>K zIJg1zG{!THV!AKNdnO5aW zt-47+g@#B%3Z{it%Q@M`87PUsQr8-l>(V z7?crSbh@OEA$m#}=67-ZTp889W3?AU=1tjMdw;Ne(Izfm0-RQ+6jH&8gwGA_(Q}sf z2cqudmvKpmxhIPXLGEOm41F$3^s>mhI5{xLs3uHjw&8hlNfyhYWJ>LMMzm7Au8{{4 z-78CWHW(hd0`W;PqChl|g^3)t!&RZbm@=i00BhlV_)wg0=hMU42F)9g3L@3ao5I}H z8I}fZ8eb0a?<61oj=9=X+T!Eq!RN*aH=0Y9i8s}rg8IT>C(zNJ!Th>8L<=0PZ>~y% zhz0Bh?ag(U19g*K4YsztBIx+FBiiPs)+@S)uF6ph=|=6xgUL*jcixtPvskp*56`B0 z={4aNiYE!i0tq@Z1;pR-k?I3o>lQ~?sYinu)T9ag!9h~z6;ikT8&2oT|A@)-z( zaQOIKXY~=W6~KLycubCWOz(G95I!BBDB0Pny<_|zlgVmqx-mrqM_VmHhiBtJ`$Z5w zCPrd45%V_Ko8gYvDbKOB4l<(Fy#)}+&?NnmY-1A}rTwO$s?$(4W6U5%XfMI)w58zk zbnp#zcaX9eQujFlW$d|exgN>CX+D9ODCFX{GoRcYei!0W`_4DPA4@ELI0BSq?GTP9{qy5{Jp>{!$ilU=1r*;&BcRg 
z$*q-IA(UIbR;y$MuoVtrm}_sru-Iv6QF-Z$*v_HQLPEzhFGyrl8>MSf`fNpzygHW~ z_QJA574ufXwN23TR!mhNU*^BKQw@5<dJs*_=x{mDYt5qy%uW6HuIrYQdUw=BHHG z5Nt@%wEdaq4{)mv_E2B_!pNn?M`+Gf3%JA^GCHQY{6Z+#==o?VMBVKN&I-5tw2=+-ea|`(iVDzDkf` z_o4ZdXMG*j@}fOMk`);6@zP0?jJxg|pqYLnuYp;NEjq=E37d$523+{9c|=_m;Y=FC2zr0q z9ABp`#xa?^D8x?{^m9Pb8P5(LYi&GbahTA*2ISmx(8c(0gM7mGV0*-m^P2+5>2y*D zK>!ty(}TsN$-pvPyv8MaFTTJ&O7I6s@>;4;BIl36G56wWqHwlP{~pWLHf$Uy#0Puy zeV;G?gvis^Jxj`$>M5o?zm}_}UVzVP!9jt89Pwn(1x#nRAN`d2;9sJ`tk0AOz$1+E zH{8RxgaNe%M&|1hrS+*9C*P^Q=fDJ&p_?m6QWaQ!V5kK*vuF%HaecM^I*D{f1%Ubp+IA5m}APs2n1ZJu)J^J{Rl04s^nuyFN`DfFR|@!RJFA-DyQV<_xaV4SNKY62@hT@DgkLAq~ zhG+%xacHfgNfA`ZaU>zuj+4n`fU3TLj}&960XK1bcKm{wvmh9SVn*;5QgF*KxDXp> z;Zr51Q6HgH%jqJevB^Jiu6LMSlE`WNR1ubZUzzA5+#sU+UBVg8!D?yT@>=FvY+EEQ zC!*yn>I=^d@TLt~CRiEKJXWgp@5P+?!Jd%4yZjSDVZ z`OkMD7`^B2*g{%}qlKpgf7Zmo0$lvg7&BQ)Aza@3G~b|J$Ysk*P8I&CB}bAMZW-~Z zIR_wi6Up0t%hZXSOGa=}k*;=(xjt200^6TTRMf=`GX0xknXv$dY&rT#xsb_X8RNyA_$By$)d>6vNs2f?oR!rfdl)uT3^wm? zQwUBwSI&b&0r(I>$MjJH`fi%N1_>bz?&Ie_?js~TGj-`X%$+E9%n{r<<}`S$e`-p) z=*`trS)6S1Q%@D>CURjquWCtl()2l|<=i+Y;!j1i7jdhWpckp=OwWUJ0MIi}l3TJ6 z%ie2wuVKrrw_6uhff+-6)=_Nlw(qWRJwWbgGK?~1p|U<-iQ8R_>vJhnE;jiLPcBi1 zRW@hF{B?5XRh6|AR&h%$^yWc*ouol%@U#QTr4H?XOSYZzd|Vm2@o@5F7Ops_jl7Q) z_!ybL>GEq;&gio9wM`Qi-TlKa5EY2IY0@jteHNx%WR6`sJuJP1f$&aYFSPnLp{u4Y zEC0QDql)X^>kq8ecE4t_gb{C=2=3N2Gdry^aVqO$<8QdOeXI3e?r5`^^}Z(42qSR{ z0UzZY8>scj$7ip(7LQ+vQ=uIKkHj_~tcpcgSP5 zl5+MbW(cv;e_PPRsa@@MkrcgqMx5Z%N!L9-bn~Ur<+53s7!rjk3?KlB}I?)Qdv;%ICl2PJN$ftp)ow;+k%4wA>Ck$|vtQ zY_;32dscrw)Oop1ekSSV`gS{<%RUw@3VxU0lDzU1SQNO$YkfWP$ke$i6f&=S)<#|) zlsaMpADLw$TU8oa^N=>@h~Cf?=Nn=+j|^}w(vlxqQu54&1r>x{W^6ldqjSsVb<$rwy}rmwYQ01Baz>U?dDE) z6Enk8YWv#EPCC25t@EorUGU5O{POaAz%~D^imu19F!K|CcOQ6u9A(3jzt&6Lx23hJ z_sY^Wy`DrdJCS0duxEW>Bp16>_r;eS+N9O(hQNvjVv4ZBkPTG)KZS(quq)nebe34H)H7M%ti+!MZpA9N4oWcss21+ zAQwnD0vc>}2(d1Q#3z7x%6;?j6E#S26$>I+F1&^X5Yhyy)jZx2)-|Upucn@=gqJ|1 znjL{ulPOb0eXL1wk8Ah>PJa-YixeC}tZx!&A(kWBz|&k)2zfAfgt^NQ;Olk0Vk3P% zSYd$?<92$LGI`4r+F>*)w>2H8@J!QRnSiB-i2PD1f4t*yB0TW=VEPmk1ex?YExNMN zI9GtnDg}xUYG}IWCAHvEm4{~@{-51el6Asc*;aKov?K-kv&2q9S;tVToYnO+c-B=` znQKkgiC7CwY$Fiqj<-%#M!D%}%W?y{P=lzvRFF$pViFDB=NX-O>E6kM3WCB9`o^B* z{MM$j4lm`~NPO5-ia@%@awPiq@h@2GFf=ysU@*00s(yk}5oIaOg0TGff)nIUWYyxN zcEn}cZ}y^F)#s&R>KDsgsBwSUKb9_R?p87K-R`$x3itD)iTviK$x&+bcHFT*Q!eFg zNcceU!8YQz_sVsSd;ERa>;c4~o)C6(H5wX?RrI-;Mgfj(au5r*P)ju{uKG+ds!M@l zW?klvU;Oq*8pDCohHSQ24f7DeFk&%(PZcU>rFa>O6fcD4U}U3XS#+b?NZOc2maoDf zS5>B4E6*}7JnfMM)^Z2!u|FFCSETDqB*+}eo{nd-W7`sNQ!;2e+6~Ni)KbM22iZWB z%yRrZnm~6U0RBToY0kZLy)+s{VKacat74^qa)$4)&Ph1*?@Ov-g?MMEm?8Zb;eqt! 
zLvhaQgRdzKuk?`*jXV%Juuj*{CsQsj!V&}8J|X^iw$%6jIW)vwOI{HkFX{!z0lWlKgw@5_{( zOMVy%4F^Dsc0R@>XubIc?i6ec|UaBw?M>gea5yPFzj5S zT>m(ee^IdLw=-~?{o7xKpf^)qkrM(2p!((az6XGrED0(FM33D<0}i-zg79zA=DNXS zEsb+Zs~m#O<|j?o&r=|HRfL83{B0M~P{4zigdGU_Y0sk`&i#!eN@q9FI$Eh0D@$c= zHCwJI_FH!WbsFo5orbP4n^#UY>8;Ped9MS08=u=>R+PXtTkh6>nUbtX-mk~TlT<&} zv`4nQ78`LiHas=DuR9r3LjJaDID5~MGzV7ac6>D$N#lJ)K*b$#vtKZ<$~-Garg^@I zP>8fe%19Y_zr@ojHZ~{hg_(b+=~elZnQQ=ZFK<0h^nP0I2;dD#pcOcEKg%FDH|FA= zgCO~T$_6o8I$2SShA9w6s>(w(SXOn4pJ?h|oFzAC(qSCg$%!_$fG;Qnflw=yLUdWW zA)3k1AMBe)===HMKi6Z+RK3K-|6!Nf$WbMb-SFwgWqST%&t-)@hRVSed2jSKYbX^_BIu^IWwbNF9 zpJnu1Rn|Wqa>o_q$=jWj4UQukG7HKuhoijLbIp1FaSe$CRlFxs!%%g2>DL85wjvj( zy86kPCL7BS#|tDau=B}#QE|ffG7?kw$s+S;oe~>*PDr08^U!7HjxX!ohnTQt-D1S< zv>{kD2r9{5>ItH#v8$A+WSK86m8%+ql61HsP9hz+9q#mvT0C!ly1bL)-)G``ieJy& zd%tNl6e$!ua=U}>dM}XA>NTG{gA*PE_J3EIFWC8k4~p(C2wkZV>yfP7W~hmm#ntLo z8zO~R9Z9@lS@sMv$@L065Op;&QPR1FUw{cSF>(@B%9&rewXJ#8_cAc=o6*#1DT$xOzeycmC9E)Kw;29{@u_qV|P2(ZS zxS}xa+vYYvo$*1@$w1$QXeJ2ZsA|VX769oq82C&5=~|MRo4VlmF*%RSB7`4{P#pDd zHVO!rfZDXw4$Zpt!Il+oD?D$1+{uEk#nJjBK(eeJY%HhD`*}7)n_Btv{`Im!O4a(D z%EQ}+PvTbP=WADI;~|5XOqn2(kOqamX)kKHqw#y&_tnem731aRZGz5@?m$TdETNl9 zYS>UXk-v4THB7I;csa~%`a0{~6#Le+(mw=byX1PI&dDx!XDsGYB|_m zcnJe4os^9}S8d;{%WfLBg;;#j0-p7l;vBtSuFqcnEiu4ur+K*sVg3u1YtU+w(t}S* znYH047Q2SAnx}fb`rn$h^+M=ct#RG8&mx;^A;cRG6M`R-O{L-D%KMi~ug2yjTfo~> zH4VQ8Mvs>gE0<^aSeNJZh7>i+(1$u(`q{(nwWQK^YY{7>(QcDGjqqfWJw2Vyf}@0< z*0q@`%Zi=ABF2bB1I%U^tnxIB&zV$RNhKpCH@w6qHX=p|SL^r?GC$PTAhC+K`1sxu z=1&f_c)8l2Cc3u2W@J%(6;VRUbf0Btl2F`Y)VYf`m|vxeoTi>`gW96 zdvwr9$IR>Y)MUHq$%$rM=IkMf`b<@d5=nY#^q%C`fbwITF7v&Kd~K}4z;F$*^rQ0@ z4Sj#ac5hQzCLMN`*^3>aRyVd2a?)5z3k(T7strykphhh$nsZ>Qc7_&FaAzY51H=Kq zn4HbEn!l9dl5~X1xNQFng5l~P)~B!E-}j`fMweF^Ns421yno{$UANe9e-h$_dT3dQTzRcqepkzHk^z|s)HyzqDH#~EbY*nE z!3acTnuFHKm4Be2=5dmGaC(Z~Y(EH2Sh?kod(}((&UA6`XTR-YOn2Lq=K8Ed9J;;w zkQ210aTLZ=kK-~tSZUlpgbb=&zrtSoh^z`D-34aSz#KFN6OkBL#w9Qm3&c|6wm}xW zpST@|N0Y+_&$;v!^lp@ufMv?cYmi{r4I{lR1#NwKkwjJrH|5aRv8PE^P+iKQnnsxV zp9t{@(G&~gYy7pdSBcci0$eh7${KG?ZP|P5B!Hh!V~Ydjpyepjlz9e_y56W~f?UN1 zT}>?Ii^u;+sVa<|K{^5K$KG$V_fNK*c-!7`SKC-ilQU~8d^Yh?4bl^Be3ZK^lT{8= zS8p}8Foc24u}xec3~k@==9w{AJZg;u$Bsi94Ws6U%vuicdGkP86 zxPP_v64Oubdj3pnSIZt6EKDi*gaANFtS^9aDeN6?*l&Po^l(+nHNdVjB*mkA<#9R( zcBb{DRXMY=mRP1rN=ufcI?i2TqDX}okf?on<4}r zl;fjdikvb6STV!q@K~{=8VjL*l6Q)k40Kr!tD_9n-j}cIQH4J3L)rJNMja`rb^JJA zOox=e;F?5I3T&fsrC0_^(Yus3APsM;-FFE!Cx%+-tsa;5@zPj%AVh-)t$ zF+X@&4pt>X7%PsBv14&KggqdqHG1W^!jSt~HJUay?gXlvWsLkQPE0grR#Im*_Tl>X z$Zi}x0nE$Bk%)~}`lYFe!RX7JuD=ox%p`whlQ6|bqgsXfHaF81jT$YIL9{f(HSak? 
zpn0T?m@}WjLFh8hI=OyV6rERA*m#w}U1h2qzjXGbsml6#Jw&N*zdT-dd=15Ie+EtT z*#yE+H{;eR8(c31v!LGR%vg8(nR?iWQ!X zgB&?&SyDYVk5FD=GAgy6YMPzYc)U?f6w91AysneldB*ZfNwqr7o)r^k6yycj+5=oG zIsm{uOIXjQV$7>=Gfq1Zc(Qc~$x7f?D4xDB3DhOeHps*Sz*-D^I+uTCI|L@ z!^~0YFTBJ!r7pCmhdi8L0w%yf7id5|2Cex45Bt0=AS`Qc>_st%GM2eiFurXA8)&vn z(v1_c41I0zS)vsNNO%C$bu$RG48L{WZ2&C)?)C# z>17e@z3yu@{by7YpJ=5K$JiT#A#la2nF;S3f; zDSR=#+R(v$PoqqAEtF7EmCxP>bl;Bz4el=aO=r4jf0+oz{lpsf`JTJPo^$7U#Lirz z*rL0Ew*_?NZcc0iwo4?}+q1LDEVUGyv&xom@Y2<247cIV0>W%XhlS_CXn+GXfhKB1 zlkLEMF9fYoKw9yoIFBEbwmtAoO2?fPtK2%89$@3BqiiYqJ(gJ#O3CSZtS5)QCq#Td zD;_7RGd7geKFUW=+l}kCIyx@xSzhNHB=BU*rOC2NCU#BeGr7%XUc3KTRu(22MeP|OfeK}h6Sw$9 znybF@fKbPT$!GsTdDghElPCbj>FE=w$Ot1AM3OO`xCeU~O~LnREf(PRSZF*d#^Q?o z>;6J)+eJi7qg3szm{M%>vS1BMpTSV>egNC$?5H3hAr1~m4Pbo}?=89Nzi~9tHbPTP z;2V^AM16l1wX0b{vq4OIUpnQ|fwiRQ8kTb|JSWSTROq@C$lwruW0aX#qk-YnxK8H> zHw!#`jFjBf=_XQx5f~Oa{a_)-ei$&AuTgrk;Fu{BoqrAlS)sby2vM(P>jNt|rNgh>#=@{8vwQ;2CN+C+RNN7dj;t?ykeFtlMtesE?J!WjV9* z3rus4%J)WW(aIZ8p^48E4n3tHQ9k8b_cpaLHU+paT&KQ&zhG@L^d~+YM|w33YEs); zo?4rq3NcCzHtF8B$38y_U>LwR7r2++O5|Bv z#$sZ13Jk+K41jjkomNzn@>A+j*ifN0KeIZ^$OW<*yfL`NGz?~QZUTT{3buT*ARp{p{y4spA`#PCdq%(!t zgVbI=WSZrJZYhdd&(h!^D?ghV6EWy@F=6~$$K`8cR2A~~Yg!i~=>Q|o`GeD>@AK1s z*Uv*oP}N%In7?%8Abm7D=%i3{BPIHITKaU$uuS!$8KP0af*C~(-(~u;_{URw3*`*_ zdq{v!3xx93adJg%>3)ftaFArB(~d`3U&FxMhmx>t4)wF+v~l@12ZgHeOpelk^&}8 z>}dr$wl6ypRB);DsHO8~b^1t@aoA=_md7tRbz;K2)jSa&9J7=@>-9u+J;6&>r7Fe} z1Q+j@6rI;ze+5kFhp}4Uw>xg0GSfUi8Zhbz}Y@6}@->kHZ+jo_eNB zh(V%q_s&vwdO2BFfGpWxY$G-%v(_2hc5_AcDm2Jepu?qKUkzVEKPk4WM>j+2dM@ow z8vq`m^&8RJX*`fav$SU)?UJt_67BmEgZxsQOvV2JJV3+0J-Z{8?Apzzotf{|zIMm{ zv!jhM>cxsvuURNkE@|ysfs8o<_zT7QN@VBJQPZ3}3lcCuLXJ*(Vf-n-Y6LJ=XrD6d ztc1sN0qxRH0G(w}9yLBmu9JSRk?N^2Appkvq5mzs20=JsXT)mCPH|p0tTyVyWvdgg zFNy5FhuyPMb=0E4S|_06JTmFIA{Aep?DP~m+37hq-Z^Hn+1lxt zjM>@#ipY5E0K9@)7GY0>x+%?jWiTetLN0y zEVe7E>1ZOYDLtsHRm(ok5FV|sc~;NMl_AU6R$a+j>o`YW3Kwcu3mdMoaHyt8>hvJi ztWh>ls2=G!J$JBCIlEm~jLh;lFuvFj6jER{Lt;v4rIl!cMM*%Xx!m-4piw}Fxh>dAv%`Oh{%GoMl%m&=Avcrz zha=aWj=EV2(W6)pt)ZS4nWhCY?9WY&>4|QM(#Dh+q|(i4CW0erg?KVggqHH&GZrj>>FO8onE`P~>Jp5+Qe*(xghpone*3 zu1DM1jR5gVrXYiMOB;=6>H$|z)2x)cOke3Fn~-#fv72Fx=vyIaCjK5x7wtYu7UH2y zLT24kfdm$wx}YVs4BMkNA>nVV1`C;nts)i#B-$)Wy&Zc9@e*t@B2jO_27`#O6(d3f zQ70iH5)l(4vDyrxo=5_+I*Bd`ZwZPf{sW51Mjs9JdX%( zA>}GQiTJA7Gl{)M} zh#*o$5avbfvtlA(tb<&{U~yv6rqjDcLB!Z>auT6hXE50Xt6vJsSTIUh@ClI6sk78M z1cEWI$09;bEVuyMDLC~9Yl2At^On5i86XGx%Y{aA|c5HRqkDqve$iyKc zNpBn+=_%prn2e*^$A7B%LVg zWb8%&7H(uS14v;QdcBtj&=W}%3^t`B-iD(fdyIE)BbuN+J z1Hjl=s|20iY}O0NVkM%7POR0$TLmwSrGY9}IG_Rm2jl^`t3p2+aIGK&TbgU&-=>v>s+%nlBRP1Tm*_D-F+c#|3O2I|S|Agvju6c28f}K4-G;3MQTwF;jYKaR z&B!iPI|xqze2HK&#K2`YN;M;x*q2|8Z3>7gbgv0;-zr;{WR!>9^6WaP0KdH^d8 zVS^|P-yVJh>H%cIL|dzaX{L}ypaNJ{SQG$?t3+72Myw~i4LU;%adVx$%IfB&Y8}&# zaGi09w=$Z^MKvKyD89a^kxS)QYXQue!~|#K*taO0lHl@apQF%FEBv{_QmUi6UQzI| z=)?FePs_XaXv#qCyC&Fd>TkX!Jb07dYA@b}{2r1=Hc~BCd~D6bXn%C-9nWb@rC_bG z-gs|kjzX! z{0(PIY%gm5;t%KYP}*An+WRJfV{)o)schzsDjc(KMa6}i>~*TltlOR8WL2ggffBez z{#Ok(s$B3f!*-nPLw`W;*ECS2V!nLOO_Z@re6@? 
z_~N%!=oLKu5cbuSvwSa@ilceTLf3Y;3y*eQdwYlAQZRPiL&yIL~}Uiw~k zk*Ck;F=Z3DM!pQBXD3jJ@sy@YK~m`>Mw-nmD+EQg@t_%5tU%N!(B=0-r%N9Ux?g=l zed2yPK*f&%-H$GZ0NH0U#poRxOM@mT4EL^ow@$B$T*xrLR{r(-BNu zi3t!xUR+Fp7e0N}9g8;KEcWf_nA$7wxdS&2AG+~?jy~~bP52Q56fT^HE^BP^L~8CXSa#ff_m0%s zZC6}6HP)1Bg1^|*ORw0rR){m%Lba~=sqDg2^A_GDY`eQA;%RC`>se$;Pwjqjv+yAo ziw2^{|F1O6x^s;(QIsPOiO ziw`Wm=*Nq9+_ZH0awvJUw`k)s$839Z8eDMHKnpdgNI!_BUBgPXNXota)ag8Im-lYP zXu`=S5$c#Ru>MfPZO^0JQ*Xl_y5~1(zx5=V@WQ>_ht~J?)cyqMjq72}nVEilkXn6b zP?ymp`-_q`P4pNDqG-w$F1Vlb33>@xcyw&=D&a#f06BR3^}(H zmpa4Q6HG9d$!ONIZ^*FgXohW5A>rbrQ|4ltnc-&SL?TYQnaLn1i~6Xw6)1#RaYqv5 ziXxZ9jQN8*Lu(}(;|y&?r~O2z&6#a>OJUwMIv#N1HH-H=aM#imMrqBWJqH#~)0=nh zH0!4=KCoxe8cAqqx@hkMdls*eAf@ga{AG*XX3o_L#D98Kb9~{dE9OMCSM$Pnb9BxX ztF#xg3wCJlJjwJ9RBSVgs}Y{d)jsv+BYv13Jv}Hr}V^v*_?X!fW?1+PP83)pHRp zLBA|9>K>+eLYA~uT=sNALP0$W%JdK^exfs(E_=km(v47Ih<*_Q(N989y8_cXbL!7g zQ-M9di#kxZRP5S**amTB`oZKQK!7WL!IZ zmDlV1z-YA3)M{L-%V2h6l@rl*#YLhM*Bk)7r3FnQrOd zxmsB9{jh6qm1n_Ui5W^N*NwjuIh zDv_kvrYJ=-3Ht>H;g(Gc*Y{4IG`XhfYM*XWShh{Etw(b&O>|=Qkl51O+fq~29J&RV-l}mAJ*F{yQYFKdO6j$mz5UH5H9OeJR^BrqBbCImq)JXt=8jaZOE($K+EIK zc*=uC)4OH&$jE7TSg_$lm9cgWTO&GRuI^0ksb9KiYi(OC!kyVp*^H1yoEYj_e(}0x zZB4EAu-zqDf##O$o360nC9n7I09t=ybhcawZ^`QQRhApfQSlx1PdCr&2)6hg!LYxrefHz?*Bo5hG1V19m@G9A zGgi!!*My9s)hES_vU=xtHuX18X`dVjHn;TkZ(r~Pn)`B9_|)yCxp8oup)A8O_L~Ct zaZhO$BP#oDALAc8HviN9vGtApMkxJGdBrE{E8L@FRPNkypFCxyo07Xs7D1pQab=r^ z=-#qZ9dQ!Nc%c_eP*E6~SNVlex(`>Md8}xULT37sP1M2%5WXnP6tILut>#!upXKY!LZ!58LIB^o^PRM0)Iu4MVKth5Dp^$Ke0O2O) zD$tNZxp@h#+5)BA;e}FKXiZCb3oS?6mjbc1`OnO*4j&=B@BjNgh_$o3v%531vop^# z&-46#c%*0p;51w2hak8?{yi)cPo5NG;)|lla(H|4m6aKt6SG&l{pcpHlmZ}-lVPS&85{;Y5Mk9GhZqr%A{xj4Dn9cH)-#oi+0E$s3k{i#|D_Sb=hN>&lb+Gqn>Haxk@WWbpmY z%4P7Tl=$Iv`Fw}A!nVHoiN8$V^<-b~6T8nUpEbj1V{|NMseR-A8}GlouNha)9<6Da z?_BA$Je40~ymOKN;cz_&|7qSG7j`!E?7D2?+S|RXPN=Xrq}D};-?{se2mZdW*}r{Z zam|FybEnqGD_7r|4Mfh_w%kNs!`O*FTSQRd1Zo{|Txv5Gbb^s+Ac|xhTf`O_DWTFg za`NH#X!rQ}u~k=HwQ6Zg?>RU24-E9*_X=2i?z!io|A3e;!@?b|&^~8fEO5)?qix0UoTI_``5>_HnA!vfJrG-6}# z__6%cH*b``e16-u=Yjb~;Cby=+aKO_V&~2iyXIbbR(mmr^s2`V^r{nYojCCp-1w&a z>{B=+CNHoB>wK0 z);6*cMUUX2|$Yqei7s%w7PUQH4LMqk(gY+B9 zn2C}hcm}8#3?<14jMkZu2w4(+7D-DWCDmnc9+28d(Fx^RQUw(O0RxZ>5zK)U#vDii z;wvF34*ANp2`ULOLVz*LtgAvBV9h@FASRK2A1TA9oP-G`ugnUNpaZ}JDYNn{9Db82 zd`Nxn@YtFnii-G%Z)6bjL5`kV`(aNyDY56Kldwmj&d$zvOmeW_D0!Kl!KB2zmd`_i z`)7(#u;<((TU8v|y8dfXY`-LM;}*V2?)#xuM-dgOC+@x(5S zMw0vP?GDD_flZLuzJoCg9Y*m2Qw~XBK?$+qsx(o`LU~04=)1gO%J~rhBIi$O_z{@e zP`s>^o$ zAq*DGIv9}$6MS`1i71v7Rr86@oMqRy&Fo!H-uWYFJUfTP{gtcu7Iwu|7kd+u6@7)G z-e&QM=4#-x1xSb`SSCLSR)BT$;GEU#ez=;sR(@*sg0}fKz5Ems`#~qPmQ7jLcJxj9 z+94nPM^M|ja%JbVv(Fy-ApH^)*YB7V@kG+^f@{H-a=m#o>i z^L13l(o;6>Z|rZePn&NTXe|y-^>8@emsO9oG9(NI)f*T0$?v0`HQ`8=zRDd?d%xLIB+O2nqE@Nq-+*_#C+VvjV6VjP2Ityoof&i9| zl@;7PM%F!mD#xo-8-mf`Il&;nma%exo+UslhccOUA#{P>uGNy2G9$W`-i>amK{vNS z^ceK4(OFTc#>l$o6jhGu63$_GDE`Ely%k$Frsra-v%;Jds{%NRo%nlTF5!|9IWit` zz|1RlA4`V$9V7`0GSDlVuh($y+A4lc^K!Gb`_=r^H@@gq?@&^Iw zYK&$D&H-ItUIWOP=}@IdJ_7c*Dh0Po-pkHto^hbGdq(pXLCNt7*=$$xrR2ds6cv2{ zxF_*VuK7}aJTopRm|J!{|4~R#L$VKsq~~J_8huI39Aa`{To`^}I2soLiSCkn~*E4ZCWUitU^n_ih#+p}bL+c_al zbLHQG`1fDsfV*s#F>t$n48li`=GGu^>_#KCI=>d#I@E>mTlfwX1@PVY2}t~-7t629 z|GuNI=j?#Lup&Bh`Yk|r#~tZAF>b=~GoUN5jo%AZ;Tk5{`{>#^H`mwCvr5G}q4&{O zAN}k8zn=kWVep$Xqb%&Y-~<{Uz$uEp2#sMr#SW_&AmS3M7$;O`cr;4TK^*Y1UDT&P zG8Qp9i-mbX?qf8fQDlG3IL% zSqbyGKjsf#4@F83l21pHBaeBE7;Xc(30}eTvH4UKL7u8FRYD4TWQwfFj=9%W2bFyi zcv#v4F>+sNeSSD%DwWAS#$H`lDswG9n(C@c)#qfB6w+pAQHxc%DC6*sk#j7uT4j|H zt4&40@vkDydUo{!gz0#)12MAWfB3lwsfB=hMe~ zZ@#$~i!ik_XV$_FeaI;3s;Z_n>qkNRp}%n3!eg(E4r`$^8pCoS_$Dw 
zER-@?yNU*B#BQvCus+3>;v2PC;>*Txw+tsmA*=T^l5Fw1yPU-AjA^o(2~(&J6eyS9 zfmF`eQeVoTl+A?af+Swb2mQdC#fnXzi}KG;lXu>)EYoAtiqVATgPyEhNw{FlR4KKT z*d|F>xvDdv=2xQ{tO`?hBu4bzxD|W2WuY;!W=I0I$eYXjVR!Nmy9I4#t+{P;P1n}i!dTGl z4%QVpoK>|Ib#)cBRZd4y9X=K-tlipGv-!4FM>kKHu=yw%{}t?67l}b3%hWmBkisKL z+$GF;xRjw>pt=HQW<1$184U*c=UOdD5UR)?Oom8MCQtSgl;0i&MH2L&TA+VAln*m5 zCNM&z1brE>NV2q?g@nvt1QKqdD2V|s&sl&nwk%8#$bN@inWaQwfZTWhlTr3yGRhS? zn6Wlrbw0K>-wx=eDJ%L8kK21c>=8uJL+m{LgaNZ3RcnReZDNDo`+nSGd>d5!_+abd zzOL5d6Qj!*CXUMrK1J3KH=-g!oVJYkF{l;p(&ZKQJIdHE;F_TP27@5Vq>Vw3B!70A zLT38A8vnJ3>d9Gj*sQMx9Y#z@|hsip2 zD5hQ}q_}P9gN?l%_QuJZ`ZrB!DA)%k?{M>e)xX^R;-NiUAnAB&aomSDmXm12~beaIJq-laFD z_~Mf_A?5AiaABKrhDZ{%*|3Ev4GMhpz3+!yoX*l5z;5rp;^RPbyx51+fo6-2bA{f& z7awYvf?9`GoDLGLD{b=jBOiWvWS{l72MMHxrvyoHqI@1%y*nhLoe~ek{9p%vYu!f< zUTIs|ike2{`c&+ySep$hzENxr9v$gUk*q6}ilH9Kctpwl1l5u0AEJ_q3lyaGElr?< zOcH~}?ORHt^dOSA6wjxDq14iSEVU1{X)Z=AG9p6k`$vV*iSHQ*_PqkX6xlGL%JzQp zrb%UiPwDii!92B z#X^zeXqY&@54+m2sdN&37DHd*kAT*r4+Sdlusy^XuYY9vTf&(E(dbQk_Z?U4zDoRx zgk}Q;19vWAG_Z{{vhx-n=0pYR3~$K+}5} z|Nr{>GvyyyUyKND$#`3i!eYX_(pfPrhu2Nz(x>v$^l6TtF8zNaKRnIx;bq47skm+g z7>mkhe;>%!^k1VZo_8$$uQ3jemHI!GQ6B4H?&sw77<6<%5#aLNf$<9DcYHHXQNO3Y z`hWkG{BL?`)-NNkzZQTD-#{Qb+}o%HL~Nt+?IXUd2J?TVcYojBcM5C5XdJ|8r5BP@ zdF4r}_sjH6kU*m(=D|t)AM2xM=ut!0Gf6KVu)Tvx(y!>0QqZ2BtYejuuFQQtfLtLD zgpkmY$nuzD+iNpM2Fka-5(w9fI46!In^P>%&wH`W8EtD9STd{d-A;M0*;e zifKh!OcLpbNe!m@bJC(09R&Sj*XHx@6e2VD90V60TPips-~);XUQS0NmH;0JW2;~^ z9F1c`W;7mgprg?ysQCJVh=WDiI-dmchjRZwLjL_E-26TLi9~;@$Lmd|Qc173Cx!Qk zFf<7S69b?pc~AorUi3dw!vw7t^bdGbUX3&9)S&GE==W-|BADjV~aZN6xnv}ZW(i~Eq6gz>hgM;SCRB$G!zOnAY7mri*TINstE6`d|8QmNF3M?fNx zOs2d;1H(8|G4n}|E_H<8qXG{?@DE4f01-bvnac6j!VGh2zU?-p*sd@IM#hGP2Lu^= z0nq<3!Z&e5xxNpV>saNIQ%c!V%CnSGB}SG^A#+VAr5k<$Y#d%Nh~(@U^uL%0lH$f; zjdmm#F0Td5SO?)&U9HZgldE((@D@tc>U8oBupb;4^YAf}B1h1Vl4XayLpSzeQZ6GZ z*MDZpMdf^3a-6!%SO?);{BY&I`_U7~O~G5JTw@)EGnBHDz5QUnTH-3**oSesW>8l% z5oYeN_8QI)A&zyBiJYm{!w!Eos;Kz+;QTQUQ%bpxp>l1_Z?6#?6XIA0QMpcA-7yZs zW20X#%7F_u#$h}bq5cK8lJ|&9r3EADmQhDia}Vn`^k-u?78&1A-+*(o_x#?S;B;@B z+;avnG7);Na?k(43k2t$?w#O!R-$`u&6V?eHa=Z>n&wpP(2Cqxt>C5Rqx2}Ye5)s` zk=M0?Xxg4n85#2U!4zHy z?N?x%`sqz(bHCXPC z_aNf{KQ}za}--K*7MVC)=<*B%t6N9($#_rVs$xPB$sFlj;+&^LXkdHKHO%l9!~s-|}Z z&}{F%rI__`>Aqj~O~)DK|5BuN#gLx92H$Y{bow9o(&g!Ul#@zGg1kk!G9$-k`z)1@ zbis{8B~g7F^E%@&{#szAF{FYDVv7C2+4AB3S2jz;E1}WxV%lWj4Q7*tWdp4%H{WvG zN=#ZSQxeu8(FYHIeRmY}|4{xj?{{e}R+Bcsb;Q^7Z=WA4HsF|Dk`4c06j%A&A7rs) zDe~RbP>b+PAOL?As3R*|A8y| ze63fwBj?<^;rhF8*th=P4H5ShptpNoN5{P3KNnr_fK9KrJ#fLIOQ%-~Lgn;Jf#!{i zW^8H>XgO(I>*@)+-u&#yoJHH#&YBnS&Y8J(+rruX!@nyBehccjhrgQd9DNnGB&3R` z6FKuUCXF3Mpfmu> zxte_XGQMnW?lx$+9`W6dT{k;{@l)*m*y93!F8_nNX`Hp=)ml{-xSSeXS2_Mat6QX? 
z+MKDD2Hgf#6>9&tb<-2y{c>#O&-fwYF82MalnlAjMBju-mmK<^)kHB0f+zk*g;(V~ zv{7c6_V2es!i@0mDlt<5e>lJ?5D>mvIw1-vQAi4+67i5p!h~8GbtAw1cIwdkhf;6L zZ-a`r>EzoWHR>9iTt}*-dUz3>@?;WJfCm6(F*jw`MetaR{iyL=IhR^NZJ>5gmy(s& zd#J~V6(7|J4F{+m@w{|6FOBk`_lDA_7Qxf!IpguurP=(nC7X`oeTlG>jkF1vd(7xx z(mY^B|I|H(G7lkvk?t|4v**bMjJ=!L%9OgF+oIcU!WVptrq$`uZwYoLM$iPCNRBV_ ze$!u$IwX&=qi%q*QUA&PB%c|_pAIGQAAS&xe-)8Bp{~{0sWNH-mew-9LA-_Vgb-{1 zFv4u8S_d=HaoEw6$)ZQZiQ8)?Vhj!L$p`n(XhCY(`;B|nQZ~V=P6v&sMSb8_;J8$D{l$4 z#-&XL)+}0a>`$idEb75!R4p}`+Je7Bj<>}m@{7{pC>koYs5xw;QVtuc7dnaRYP0|U zY8E>2#4E2o_R!n!(x3e8Mytfu8*8O1S4E)0?r=$KpV%N-%W5t-_Tc_X-wlHg{jb^z zI#cE~&-8#tUeKKX+(x1~w*oR%)+oV>*88HWBtV^qr>w?O{6C7S2Uz~}$FhQw=2 zNG>7k2PFy{=ZN(KyLDvzDeN3;K|#kl&d58OO<*DoWxy)ze z`3)+^=&IGc)4@sdm5jsCYBVxnyOMxck6D5JW3NOp zzLQ^}i!F@9$m*3ux_9i#<$U9xrEC~e2iP+3G`K<-w~_$XVIm5}Pg2D0dLuH~&=Zg- zOAu@nal2?-Sl%j0oY7w%E#x#-jxK=ZHzwY>Yj_@T+wlj%i<2?BiYj|!NAOAV790sM zqw%KQyXy@WpmBkN_f45)92}8PK3VwlV~VT_PaWg-umhBiDn)guL~T!794sBy0*T@4)%W=^;2Th|FW3vyNlPiKv%AwNdq5{zS;}a3izc4AXOId&HeiPdcSWfV zCV5F1m%-Y^vN=SfNj*XE*8-nn0nD2De5x;nqUh#GsN<;j;dMOX^im1urjzLJ7?aGH zDu()pSuW_g|3>{qtNof7c2L&ep}(Fy>jvGEXW{r-t3|p0J#A|1LRVSXLUx_x66R^LnM!_p>J}HsA6^_PFKwOVDp*{H6?b%quFIumldITL5G-q+ zr5;qU?vo^z(}=Y9Ad+;KQoYnRYOl%=tgbxTtq#Q}miV}Y^5jJ}8>0}$;96)0)6zg*EG!EZ2psuQ zo9zo=anEsIUsx!AE(UC%dtUmcFXS&&I2|COWAY;^Vh)&TgV*HUCjC$4*5IaL4+Pp% z6zK_oY$AE#xC11A{{0#OCrkw5>^hKjV{d~$*O z6We-)G>Xc*<$c2*hR1^*^pOmab||9W-f5Tsj=lv&2GD6 zUV)`JC{@nAKHzSwE=v>@oMqPR)_IIT*V=niM%RY;d-h-+t$gGQg{C(%k=gJ!OOKr0 zlFAxz$dyQBsIXBYsc_LKKxA3i3y@R|W9d|gSxXE{O5iJ`R-zwImUm>tLnKWb5Uz5o89GOdB; zwb1H3c|QmM^8+6-A+14cDEsIE`78Oi@c!4`g<_(wy{)R%7pe*C-AjW-6LzesU*6PM z-t6mE<{=jQkkNZl-8#Qt-PqIDjsE_1`+Hhu=;3wiKIgnECaqdMjX87G-h16$2}aj! z;`;W+j&L`r7eKn##jJuiM+LDDyB#mXkRA~t^B7(^O@i(;B|pM_WzrW6B}0vAD%561 zX&R+zlqNWPOw>QUaEPiH=SN!xZI$)D_sLk=t6*di^lXeLYxDD%6ebj{%f%jJVjneb zpc?qY{-_0GWMDxT2QX&>mI*Bqri!uQ=EqnY3IPyO5EjoG*IC&SJkJa4djG|}RW0)Z z;{xZ*o_D?{=&1^JuQ;p?YK;IwSRAAeujmd|q2uSz?>-0Rn%9!}Yc*h5;0#n$+8b)R z%jYZsPtL}tE(+fqW|7#Ti#7y1Dm%x`TD)XVd3Q~Ny|NqsL}HZIjRC-J|FYIZVdtj1Ra>x;1CUFy?oR0eeqb&+2=e% z$~&q)yU&x+xIagyW8NZLd1w0iEzZ_yoa4bRW|Nh>@_e#OrLeVvlUDzJp`GK)pdB;>@7<$p`HuiC$DPtZWNvO@KGlI(6RZ6DEme z6}VQuV!a4^0I$V$D>>!m6uV?)u5Q4JrB@oW@DT(bq-tbSxcu>02{u0U6G0U?Z+dk0 z7Aq9wB(F8-6GnEv{9p3lX-?24EQSG{8SLumJ`UyqRLh$cqmmiEds=*T<@xB* zVHJ?xp;f`(^Pdl2LyuE#hi(fZ@@u3Z^yHDx$ECtWQ;PW-%7?Ew)AK<*mWg&zAn>&# zp3hvJR~so;NiebjfYJgZ3kyaTV2pQ=X?|^{Ax6G~%2D-FUc$(w<p&={&Y211-(yzcTTRn`)<;I4W|;^f2$aBJ}s1dJd5rt`Qknxu^-C+ z9(q4Lc?uX;1bzrU?iiff$UGAooQj6GSLCmN9<09puDifoFz#n+TbX%j92DwK-1#wM8;kZc8hOXTWOdlrk!v(g2;SK#-^cux!keFA4IM5Sc;|DiJ&Mc}6jWbN6Y^+S9;oR__{BE9E~mL0O5f<*Tuox#%@ zr7@25ogU>&ovbe_mhk0T9_E1gk&^W^o|L?To0L7|qZK6_;V~BcuGxCxX>ty!CxO z5RFNr6Q(Vo7)uyI2+byk4`} zVj6{$eA*oOvW%srAmjK=LgF-BiGv^}^XxTk(ofBo)YkiHV_?8ZBLf=sjg zd>Uh|;;ZU#ZhTc8z8+pXv@M7(>feO&Z3xl_g6JZ&vpcw9Si2~?|HzQ#F??AShgo`* zUoG)oRhAfrd#mR7_wxGouoZ?g_;uk0$|17mLn}ybIft%fKJO_U$gbDRwS*Q`$w}|c zr$9yHBq|YolD(KJ#D3Q0AO}{Cy}<)H`d|8_Sen8?S2m5t(62RvM5Ckq~2E?EaN1Epf{! zbW=IyvY5gAqdUm}}cfVfXIXhj^SM|VEr3QlwhK4oQV<1asbP(k8~-7Cvm)go_7q?N7BqPS)$?!|4HXXLz(F@M zMSJsH3`aR2f>bgIW~Kjhib5Ls2gFHH$qiSGn38jNZW!^ZQpM{~J{r^vBS(snt;Ad? 
zI^>izQIb;*(NYSNr8ld7o<{8RIsDDh%L2u6!tDmB;y@tn9p)4|V*DCWCS|x#2Z=M6 z$x@n5mRdvynk6PmAmP}4`Z9rg0)ap=NV(l|qFDaj_b(IiQ&#N1F$XwfnG*Q^0p(f0 z&$oq+=-hYZHKhf&ZTjyt8Hvdi^y|ZUj$FCrjxFn{oZky-NFdo8;7(Dv8@Eg0 zEEz8q#6KSW!){H1?qWTFTDGucdDpw5aH&y}FMC1(H3n4ODT;mz=?^Ovp7pGViM<%x zFz}OOyaLgS*IVgul?EH?vTIG4rCY6rN+pS*h3L0_bwm^{H%b$Cb$1l77SlT3Y|_Hb zdxOE*yF9_}x>&e!X7$8zRRxyk?~sg_3u42D_GXc@7-nlsf{}K_TNjqCxWG~toL*HO zt?!9X3cA3GTRw0-j9cSjZAE3oiJo=24njR#<<&nx)lnU4ov=uKXM52*Yt6{u0^sc`Q*f9H zXPt-RSpg=Lk;5~g;N`&Xz}A|*qVRy@?H}C_N(7z8_Di!?ejQ_dY}$91U7k!b3mW>GYNjjw8r7aOGob3_51*en?@!+BA%Wv)m- z4UwpU%8R6RUqA)&S7A!B-AxfWYB9nxQeP#KM&oKE)6HzT4rk@yl7~>IATf%-t89NG z|4gINiNBC^?@B@4IR0lE+s`aItw#RUyQI(k0r-_IstTAU3hRv0d{O8%N^qjtY!>B( zp@q&x7I3d*7A)!KBxA22&Xnir!IAbamYEF;_}{$+Dd>_vvI)%BaRj zd;4%yS0C7zeo1}^d`lKAdC7Qx#zdX5TSNCt^tzWWk`v%AdCz~JKhlv69k>ydeY+s$ z@egSz1Cn+M&}e%e>KRf%vRfT>F)8kI_#)u|K7f=U<$$6i(xk`G0a{^_rn9BZjfZsR zz4)YITRTr@7aVwOtB13XOa}mL3&`(#!ChAdCW9k0@1Bj0Z1lf?;3+#Ur*XLp1HF$IGVpgX!?{~3hfpur|&OJ_kB{+8(>)LPD>DVP3ahB`+kD)PR zJ}5`(GlLnv9!e&YX{1Wa@1PxY=vXr8MZGkAv(pKC(XXI`y+qblR+hmclhNRmZw9?i z<=0>|$q%R*uzp*AiemnX+A%^+C745YOnf3Rye$y*hiw6iAALq~Bn4R_p@0QDC^~B6 z(TFXEflxg(U022U2?%LzD~ET`)PQzcIp$jN#_ijTd}QXfi|5?hU3RNDReGs-W39%_ z>5N?)-%j{$ol|=2tew3rCp;BXnitj1(r6k(9W@iGYCO`Ef|BOi&hiO7+vJ~E(G)5X z>Ex4Lg@>=4a?a#xJ9BCf3{j`RQxR|ofZ~pO0T}ukel^4wH=Uinqols1z`#NI$AD%H zW|zMTeB+Dw96AmF`86~>Xaq-bm4b^wuqD)ZNo?eIuu9Be-jvKxb^+Wh2gkVTOWmfREs<6p@(we=^m8 zsqmQempb|9I-@}^r|?Q#iukf%x0jCe(_phfi%HWA;$JU-ars)#q!+ZdZ{CszrdR)~ zdb<4K!>_Q8W5G+u?iE`;K9?lTOBOM{mv=0Zyt}^4zUs=Gaev)+L zB-xQk=L9LTbBZE6=(lIATIWH(|MLtNc5A@? z5p^Ec8o74zW~;Jgtfl~4&fEZ`&$F+qeZC!g1P6(cpIGis-{*r?4DB5bh2x4G8V_Jz zLN)3Me*hT30Lcj0?E>?WuoD+G)wOnZ)J{&{d74Up?yB$JKB=|JDTYnvU})YNGqlaF z==;IJb9deAk<0G~kk^Qx#q1$aOy!qYT=4JK+-Jc#O>q2yHJh8xu%E495x; zL|>Z~lY&7WFE3Fcmpd4AyF&dTmrQKD!0QSz{c#grWwDsT+Q!6XC0&+@w=bNrE8q&1 z6gYcpI((u_tL62DR>@V>S?x1vfh38vpkaV*<`!bLLHC62Yyb!PUC>tH?P{rS06jp$ zzi9|=n$!i0-L7%~f-ZPTK@h?%iG@C~Ian61XtqkW;@Z+?k2BO&;pd!IVT-!vkH-B3 zi7|7lIE>ksH&TNS+HFJ|h7RlmL*R@t`7cyxjMXN=?a@SI4mI+}TTj;z>*HYaO!;q& zMxaH}3bZC)b!U}JvKH!jt=1*_I%;~I1tlR@VAqU=w@GAhvNl(Q%Yx0KZ((8!guw!Mi7N;|xyxM)yC!W4 zHlT*<@?sSF%vy$)*pbSq7StN6sf($rs5_}gsb3IY6YLp}SIHt6S}lkKM)ZG_MSrRh zFQP8rTUgac2xYu`^LYt6sS1AS zCH)ME_k1`&z%XqQOms>-wvf1_EZkur4vSijfLe}G3wSpbSRy%0p4dVj7_I7W{I0HWjX@fgjS7fsmt##Wj^E){pUy?{bo1~jqeueyZ z`Lio3Cg`kI-GuV}FtooMrPIctuN`xPS5<`MT1|LQ4?%<$pS%sTepn9;&mIjVl44-Bns< zds15@*u~P2yXlf9cPLcU&^00A0tTC&uD?AJxxFq;|731O6KgWDO%)4|Ju1Vj_1;^;2^ebV9-R=m3 zIcJ?U)VM)@Y5i*8UA)-i7HP0pW2hP*1IM(MSZ(>@#g*e@7A=^w1PyCdkGaF`9pS>F z@T93oQGx0H1q?V!@$QB~D(c=_`5ufXT>56Wz`7n~zsSmO+~EPtWX zRUdmVy?%T=?w)Im=t?FnTsJEii3DdILz}4Et)+kQ)}%>qO-?WTbX!w5XR~qLO`AT) zY2Iq(QJN9t&GJ8hY1)Bx^W<+QKRg><9qN9#8{cG(Y>c-Coe^+AzRm~jY`uP>(gI? 
zZoN)t|Dwz(9}^)c2>-)QuMy>GResD{fL@`=R0&p_Z9`{)^etA4sS=*&rLU>XjM2*2 zBxU(U@OlrnAlPWmfxWQefE)pKK=xu`fW&aeDC5f>Tk+GPhS%(VUaQrZpDC8;IB$8@ zBgt!!x^4A7E%F+zJOpmh{C?OXH4Q%S>kXFQ0{Mr6U@W0$8v^MtlzjoDV1xGo{7>^0 zqcLkJ9Zxa;MyXD+hA-7J#Q=leD{S^f08?|CfPnM_U#O%SDl-Y{*)1SM_~u)=NDTf8 zd?Xh>^8je*>;zuH=k$66P70$^0wD1vf*^RjP9GW}2IVW>klz?zQ&JL~;2fPp@Pa{b z^T{+=r)3$M=5%I;Yn1#SF;BXjouuz!v7CAnHK>;x?@TDeRxiKa%Zig=|OqxZ`@T006KsJsT{LMft~U z6__JC>l7)U2!vf_^WZilWz^0DjSle^NVcG0`i z7x%zRPTqCo$QZsCv#51BFP97$Z3gGI#2-R(5tfcW$k&Y#4@G?$AJ8|d$_bN~Mm^>tw{GPWReo8)X^!-VC*mrFr zI3FYZWg^+g*G#kup*m8&G;r%hk6d)oBk&Qj$?zB{U*OOK_?Y@H|2YuNUYG}5^05&u zh{S!vT(ziQ%jdz^aycqTm-j*)7#xX|a7ccA06vzU(GP0IicjulFJbRN`UH-yY{z{8 z*tsx{Gm4>iSB1%P(Mv>cQ$p{#ghjmpJ5D2MQ6ljWNQR`*{M81KxZ?qw#1Y(uAUe$8 zGng|YUczGE54u{jJsK`543%`oHwrJVY@1Fq*DqbN^CRojiW>O?`Lpt>gy>lsZ~o~0 zw&>CY8k4c2WWgIRtgD(bCt)q{a^fFhe89$;pK#4*E6ROC@~z(-GTDqQ548cCOG_8| z>q|VlkAq!c+-=Qf0Pkz-@>=H1v51By%Z4o#g%?g*lGJE!hCAH>t){w$*ZEzA0WDut zsL=$5MAw@3PV4w;+M==gqk*31&DtAo;QaOU)A!3xPhFv9PsqK=P&Ce6r>%Wy*F#fX zl^%~tUnK??R&`lh2@b6Ct~6w{Z$vsdVYdzuD&kn2gtL=SeF?V@9y77>fksuSE*1)- zkH!QDhaqm*80J%8IbLaN4~>p9SXU8835MNsO3Fcbc-}P4qJ4cdj8{&+_DO4dxZ<`4 zD?;ryW0l|Y;#GoYqfHGfmL$yNU>n~ zf;7#C3z)t>&Twn}YAKo4q1 z%tL_cz%gK`S^d}^h=-Lb8cAYN)Sn2#pwH&BSUso(=|{R9k1XyzwrQsCfvHpy zGye@{$d4Mm?c-;@@mZi1!1|>ZT+j%;@46N)+qkfj<>f^~>64zis0YA&JHNsp8%9%G z6^vSZQS8ux20k7Mg!oylV3aL%Q)@+2NnL>sfK$|Q4PXnRYdZFpFT8Elq|3qG`RzCT zDLZhKj&p!(egP)yDi-uED7a5v-mtB20tDlk>fyFf`cwj@QQa|Wk9};F9)4vu%6IFG zf=<4}sL@(gyg;P1ndPKT2a;wvarc>G+beh~VgMy#Iz;`I%89aqcFrrX!VE8ju3Zw># zA2Oi1lzLCaEQPnau&^HR(=e(^ z+gN5N8lS=u3NqZP3elazYG*fx=UtMlS+Zb4%k0^an{T{+^X8*d*Z2A>SFWA1V|iWO ztiXf=@`pv9wpc9KPEViq2%ymnGhz4c=e=H^AMLRJ{OHg@kH_zyP?BhmEZ=<5i_FfJ z>C@X{qMp0)oDJh>GtC&X{`>@sT#*haUSPB0t zeJ+fqcMN^L8{SBtH}o;Q1G{xAxU=jYGT#>>NpuF%fhejrM&>6*-LlForgUxv%8~?B zwqSLaEG~qJjSvS~V()tF$y$uv7;vCCPreNG!>F}`54;YC*A9+*?RKwYXt1ogX+d){ zGb>R!y?H_Nf#&kEW-zTP0e`$9IkYNy&J^BYG?W zDsO5+^C*_Pz9pO+Cdv;qNEHZz2Z0f{=dcESr;P*gENxUn`)gEYzp&14Z zSmQcXDhvO#Dl7$d^9B)U z#}&}PU+6A^Kx^T39HZwg09c(CD*$$_CJco~5-0Yp1rtRS-kd zg1Ml~67u`pb|Zuwr{|4y;jEb5R%WMxr^qNeW@#YcG&U~-IfjL>q>3$NtPg0-bg@TM zCRBwPBL`@!uIhrzDja$PM9<`Gv;#s5w3|vm`^@xRw4T#KT1V4*8r%c57LL`j9HfOZ zQLBGkXP`NTp#??*W2})jX|*g3fetc^M$iDW0OM9WI$?pu?bLIcYHKTZ3smjs-vCpgN>Y0;{? zaC}Flo-2Zs>Jxcg!!kMXdnsA<=A= zboFPIHnns{$LqshpN|%RU~-w=%o-p8&VY7JwBE?cbAZOevKl>VUmdN%FC5CZicV93 z+gzmc^X2UL^Q_jkySJ4>rgCRhxVcy~fYv#l61#1JUqgEUsI3F^!~)60GYQsHYSYr1 zJtm|;@(mLKXec&S6hm6C1x1qG1IkJmlVETF!NqDECOv=_V9;8$0*6XMbH$9rAPJOV zOb!4HX33;ww2);Pj^=^T>@w(Ei?uXg&^ErKh-$YhZMu-{0x8vb51u#yJgky{SX6Xt@Fn=M`wKqHaRi z^3%F$ey!7NFT!-*YhxYOYwI?>c-F3R8z^#@9qCxHWApl^Hy74SDTUAwM?7x5NsW)kvY0@5ksMt`)l#k00_;^34AB8>^v4`y zbSTXD@GR|6=z!5!f(8mN8{+XG2mE}D#q&GbVWdzPUqwcfR#59<9I;^$1Z68BG{8MZf>nuNIEmc*D>?(4-D$J@ZZ1 ztV_2}+Bv1!^bvgsXszwjcTXz7s}LnKCU-PP%RRcCBlNHmd?ja_vGAH1`or-0n$~5! 
[... binary patch data elided ...]

literal 0
HcmV?d00001

diff --git a/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.woff2 b/_site/site/public/font-awesome-4.7.0/fonts/fontawesome-webfont.woff2
new file mode 100755
index 0000000000000000000000000000000000000000..4d13fc60404b91e398a37200c4a77b645cfd9586
GIT binary patch
literal 77160

[... binary patch data elided ...]
zl{cCt)2II&xO<)-uML|M;dle8ZJ`~f2E8$F(2}$CX@l``6R_kU5=z#}+)tXXCsrYe znIg9musw++6$%Z}mo$XJ_)Al|E9#NL$|hRc+nIxrC#2?vrCE*+;Lu*%7Pkduz6Aoz z=6?VG_kH4)EQP{&Cn9sBZ{MzDvB&+fAEV#BeS0nl=WFQ5$W%&MJ7#9;mhXj**J`Ir zR+6|Jyh86Q(e`S^+yNbNO|Dl=uOgcpW%Vze*S5RgyIE$L{fzW@ccMx4@;YnlkxA?5 zaW003$Fc~VWK36SZSMTIvt1ql$(QxQ$NOCkX3yfdDS|@b>U(Um*1NaC9boQ^vC3-J zexu%o-s!J9#DP10tv9j7EqX!0@7UK^!6&TF4s>Fljo2K6S5MV0n9Cm|0Q3e&Q!rA= znpX9Z$)8+E81nn+%5I`6XaO5-DT|>j8V0%P3hEr&E5R&YWX(0Rh&Q}B338(XS`fzLR;O0^i zd>Hn<8c&)sFK*C4k~U4@vH;Ce=+&!2e5nwaToqMrp`;65!)&i}-NFU5JrG-atd}08 zK?AM@KeF)*dP-jqQZ@nvt^QL%gXO>D3BQc`kD#^uZ_*#iOk;S?;n2L=z$7UxKT4FBS~l*jqV5r3fL zc?yV&`?|@ewX^2-Wh-^gXstuOJjO5YEOQBWd8of5@oLxDN$2purs%J=pL_ArjuQT~ z`pGQWzw#ySrGw631ydqhJG9;XUw&X4AwKL~`rM8aD$d$;T{udabsN{W56yK?!3~Mk z4%MMZK8T74XzxsGaW`k;61Y+_7WOR4s*$=FT3yC`ppYc2Lt3S*wviCb!H35qsum>>o?g+x^38-2Cux#N_m_E3sN z0tqF7xNdRLU5MqF$v(gd`g-)XXqjy=ke8ct%L6}x@&+Ke05ej2PWVuP&-WV7*Xz-^YdpaeNVp4 zS347URKFp(y4dzcf?Euw`K@p14Q!Q&zAE|}u&1=ZO9lazgiD9wRd%-AyvB^#t4>)o zn zTIh5Ujl*cs#>u;pQp2VJM{vf&6*oV2Nj_6aiBDkj?Gq;%?$-RYrP1murR10)yKlB$jpRoq* zU7O+1_k{A7X`)3)%S6uynj4a-7SL)p zY{A_GL;yC~rxz{!hK~Zb)WIvKeOgsCpI)x#cu%$6yq%wB#r)V&9!U5b6c7uI!s=B! zB1wDqDUsYUg#?XSz_9olF7?xcD{h2wDDc&ny!|Y+GD2sBK(aaW{CO3T&3Tvuj8CNjN6N2 zc^<8pBeum+YM(Y_a(^QMr^u1Bg5DHL?aMT55*qSP76$I$#wd9XhZgTn_04@GZH^3E znglJ&eDjmkh${UN9h6h?id^^6oQ?kIhlxNE{|n1N3fR(~3Up*`2 zijvce&z>hx^xV344M)^U?$&HBi@N=CsB!yR$aWt@D4j$@85l>8CgVft*s;SQ5ux&v zuRW5-qk1%jf{J!1qa-^6yn6Hp>aAVR%!xZca8VP7<010#C z&pr(kf!0j6UhAS}@7lX}z714Y-k-Mr2U6J$%r9TLNgk@iro>GrLVqrvwAd_Anl0%1 zNXlv{{r)9TfBC(>^h9tn+sIz+UU!XPOV+D_OXveoVLr~j@2jP1&!}hW_$mEMQ~cA} zyb|tYM@Csk%p{W)s+AS^SYU_@HzktNfMc>tk=jufPq`bxkAWgW)u9_gl_#s{wq6h} z>tG`AhC9kff1(D{|A5GBWz>?bPhM<^gF2Z}8KFMxG&N-#7Wf)HTQ?+ny{83(w0{iY zX}{%0@LVcF^bQm!$DPJOmJ9`JZ{7m9kmpTCW4yrK5Wa+krveuUd*Pv0edJrHe_c_J+3K;Y0fGo2K7-^3KpC?_WFK2zB=YrOQX#|1ZRY}N$ zsjg3wbQaq1zOBrX2Esqh)oYCB=NAGx(#X}&Tlw5RR8wig^q~--1elwg97Q}g_Zmel z?@kHWkas)hZA1u-uXWbPdM8_271IRIjYHLUr-uPBp=?(Ras7yfm^#HYOSK& z`wvMb^~2LMmRw~tZiUa+5rruoQg&l_>o4?H(nG{Q-Ana{or#-gdml%+`dImrvbG{( z7p&tb<2KF1iyEl$<3+|T(cr$3H{GD2`gSx^hn7h3?N z-7f#2g>parXHTO6Xp+A#C2Zuc{Zdc36GglYx@H|9PCaBM{&in*V!%HPSi-P^+!JO5 zI@rugFRTlbeLpC5i#EQCqt8&7BKWgRe%EPME#GG`?dVxT9A|p(!G9fnHgQW#ss8N_Q1c&3xd57=V@14Ul( z;Oq|aNiyHKuw+(mm2ptbABVYXT46HV*GPgdjvGBFxMN#vS0!oI8@L~%w_{iUf@6pe z!J}wU#&NgP={AWH8DsoS@;|-{eIIF4Xopg5(CA$r`Op>xj-ym(=xp)QE=7Xv{$V{4qbf+kT65`SQT( z!ZyvE*xJEVow#eKj@8VD4<6E)84uEj`&>;30OfqZbRZDZHBUS=J|IdC=Y78387%)% z9dc1B&9C;GL0lCl^(lD;dekR|9TQ7r*scadjrLb$X}myZdUYo;Torx0UU9+a&q+K6 zK4o6kXer21DjvD?6l{8}e?ow4KMQBv`LY4j_lk?k1Ir+oK{PaH?B{SH*qzj};=~S$xWpk*YrTFKJ~fRkm`kA6J*@ z(N}Xe3Y2Hsg` zd_4%nK)XGK!B0X5uzJQ&ykzsh$u(ATY$O1^q0w5^ggB79gS0qa&ySdKa40%KHcB;6 zSuzO;!>CpsnY9ilN0f=q%y4Dq;hn8qwyJ1qlNKKx4x-X>n%%9B&MK?4XR z6VrUXNWt|*BRA29)zaX!+%fR}Xm1 zh)0bC`jGnm?+!;tk`SQRu6~VKx=N|OR5wj=Uc%_QBZ4r2r{vhfwQ+~O1RC?#%j#l_ zFq%tNZ*=in4T>4nmTeIZUgv8d7i+Y-Eo94Z+TEXj|F2#QO7z`i_A{c#-IYcf6OTsE zROZjR+n1d=Z%+j1JTn zd+6vm8?`#Qp7VM|4Fn(8W8II^OkLUcMnV0%8i zr-c?L`(fwaopm_}=js0UIS}xkC!hfcsZ1Uc`D4(y%EXaKXp!_}&7Sgy>)}~Pk7k*v z0R*+iSy#a$v~R zeX^24%(kxlnZBzNfrHfi>tqOoyp%v43|w(75S}?G)apg?N;OE`O0+b$p?Yc&Fa4;>M((f(+qN5a0fa6{?2lCvuLHUtJ~ zs?$>|(7(8KG&DIi>SSt=D-4F6OKZ8(PI2i%r5OSRluhu66AmjYKYItpG80XMn@&o9 zR`GQZ{5deuBqL;2oG;ZZDUr_&L2EFS#)4iOjE8~wMjVvio6QBl+}v)l0*m+ix|BR6 zq7j@*t-zf3jCOGVB%GV-9-qnRuVe{8>Sv@<-AIjL3V*mP=gMK7dWVl_LqBz>zeAM?E0)b*m z(-tW@b|C-yqZl(%hEkVNw2uUR%ev%$PwfoW32O$$RZzsii+!`7Q&yF){S3^1cz<&M zQOa^}ud$yq9;5$y=a4dqMi8Wo()uUXucO%AZcab&9@l#!UG*^*LMtD{)wQJ!^~{{|qje>0#VA_7t-GV0Vt=7IO_^w2S|1KGCn=&7 
zIiMqlKFliD13Y7lJK7x7ntg0O;-~v1`zg0pU=VC&Sr_guH7d{#*$<^ee(Eg@iS`F% zHA>;eTJ<4O1GTx+rl($J0Z@RWFJ@}K3xQP1SdkK<1Xw00W+4cO!<}9e@|b5YYCH+E zFWSfJrGrx^O4gG#;Z|M={+0UQpTC}7#2Ib8d!Ua7GQO-kqNNQmX*UEU0pJe@7AE4U zwf@t!j*X40k61-dQ|KSSc*Zpj9>=l0*@|=`jumLC5r}r@uU|vj7K7zem7BeOK_t37 zhCmC^0leiNW{O-pQ_NwEDVnA>L($P+o!;NhiVSBkC^Ts;Yr+#e1qvfIbcC$AnegCRn?NkwemQ9q{hZ80)DRKKV55>n@+ zrF_6xec$!x3-5M?t7hpcw?AKqOMFRL_1?t$qmqSty(Mj6DiAf?M7yNXV2p=OfuA`f zBa>sjholVH6rcqddf`ip%Fh>sbg|fg9}8rHx@*{h-8b_G>|28~r~`VU8QhR8o~FUQ zVm$X6d{aD^e%QJ#Rz-f)Y+bL?@#<8df815HKiz1(<-p~CrfcD+F|np^Vcxs=+ty|2{Ww#AoH6&% zo#cyzwgikJ)APFGIg@CG*hvi-ht@)l>k0=EIZLZ=Unl@u0cII6x44LJA^Z!4lKC?+ z9iBtCzQH?K4wgx1B&ErK=cc(pgvCHGS8NR*-4R`eCMk0^@ZhL4ck!fIkTYX0{Nqgm zXA54u6v#2s$LYCGvvG4HO>^;rGg?keO=~o~A8voFukYHJ1yE)-pw)>!Y}+;oIY8agmiMNa9*?C0;5E;h zHZt=0bU-%>p5aW6&N2xd_SY96bo}-0C)BUNVo1v5@6@~jh<6gp=2vF&@wdr}H$BYT z{4PCWcnu{5WIqkMf5GmJVYAB1Ad)%YW&d!Hr;EKvkJ70OOUUK-T=0;^+mHL5gr0C3 zEfR5KgQKbmo0CAPN#e)o^I~h<*%Y~*smuj4Wl)?JMmXI8iCS${OeonAC~;6QHNP2d z87I7@!9)1R!d8j3ifO>Ls+-yplcA1kmC*3XzXVu6ap`AXI@6oLTU$`DRye7g8L|tZ zpEjfb+C53hi6{uQV+PGfmYNmYK&cfMz2Hn@A#As71>D9s->gk`+WGpOc2;8bao>Iw z+|m*+q}t6T$4O})h=stm(t^*S)}vJOojv*?LbHPePzF;5I;L%%b*y%a&;$ig1fR%r z&(EdrJEy-Frq5agd~+-oM}-f|I^f1|NcM`aXW8ji6?K547g`8XK4#|3K%L?MWfbCz zu0Te^JT~LavfwTq1(Ui=feqFWFM%nOSdLj|`ofd%rjvvjgu(Vy^JZUHZQ6_h6WNlg9F`pn0bGzs>?3HLw0ZOK&|M5DU zPKimPl{Zeo*d(cX7TUPF^a~>+90YH4G8YBWFps2b{&?jK$gEYWx3(D1 z!<21adU``7ytCf#r&HikiojIc~8C+D%CNYW3!UMh+0Xdsi zJa%p$1_QS`eLF%c*M|;d-cycTNT3ng2n@+=H5Bb2YKy3*W@TT9jMnMqPRxN}#5li# ze0*p1fWUan)K^A~Y4FG;5kt>L0VD19O>3u&F_-A{u@MHIcSe0TnJmI^0V)0=rO?PJ0vAVOUPhak5s4~M34*5kF z25O02RuL8fQ>{_BoGq=8f#?NIsMkGNodk7Ylh7DoD8 zzPfI@YFNx}*sLL!U@enFT-YvoYpfdnBm?&Bf@OHevw%+U zNRBWjHA7s0U^svMzgEe2yb+DSJl{eE#<^>v`hffK8eg-Ib!p$35ZH= z5}7G;Zk%*q^70w$Uk`XiORbbdlm;NByg~_?BxhNeLBCc$A7><$B}~vTOe5~&dmARs zotTzJbPr_fT)?GJloLIi(i>qk;>rz=9}hSpoIKo}ii>mnOkQ42-`w&=W1Po!xvcF- zEnhzAm-46a){EHM_yRk8D~DsL$RUfV1i!Yw-s%fDz8_C7(k|$ygu(YpZpJvgCa5gz z5rLK^>vQvTkX<$?3u_0KNH*~diAHfFDBFo!mU)+qkEVP3!7wP3Uf{|L*1y4G*7)n! 
zqpZcO4g-UdfaDhx0NmOOot^!(ktSw_&U!;}Nr}%A5Eb1#&YUEYt0*XFT+&5E=|j=< z9|0W|t=$~l^XX$>=y>)o!GlGDE;{5K{rqWO_{J-W&Yzw!e;C)M$@9{JN@+AeU~GqY z5Kiw*B<7HqHp9|Xm#W1QE}fP?(CUxm4>Si|42@W%F=%{!XE;1D$fP_A?m$ZdjhZhO z$MvEw3*)8HHSKT#$bZ+I%5UrFk#v%-aEB0KAZqEQbl_q|krJE>MX7oAwZ0-PRqgo|BCn>&`IF=Y?=7?)5<=Q#D7yDqGNhr5l|ces8J$>Q}~C`goaq;?B(t0HPdZ@otlM-AqfX#@VUglq#y zWsHU;X<;Tgvt)_3&m3ev^ZX7iX$`k*O%m?D+_2dep;STdlq9yCR!B#D=dR@7LJ z85N`5m3X>xbXYH-LD6v6GPDl}URyDKQhVzb^W8M3^|hoU-b4nq-D5+^lon2;PL zp(ocvSOQQmHb;Zou95p}Tj@NO8%~3BV^2n9QToa)l4ofo^B7W2=o7O2Zy7hzS9+Qa zUv#>;B0uVSJW_+F zhC<5xXSd1N+X}5uO%?u&Sz?xr+3NE3!%pTXIOg(K;@F{1e<)9X;eFV@x8p{La*u76dWsCAC0 z;3<~x07XE$zic`7(5?15A?1C^k-R-y@)9btnLDSgvH^s3d$6>z1M4mtq?T|Iz2YM3 zA?o4=EdIQF9Ci+?4{lBwn@bE6?KU%Y0AxOc_BM={1iR09FGv=mecTfslJU`zg93YT zOo1Jo@g$P+4GQO+;4Q?&^kJcoTaNzub94*cZc~hIGLFQb;6R~&lI|MOw~CDqzYY(N zjCe>+aKWO9$K$o$5FXMp@zCQ4CIsQ>3o`==r}2dIkaDmk(QT?&E&SMTv9|S&6XJknCMcy%W2@rdP%wEgdul!cz zeevkyGTT7sO3FwDl~dss9`+PIA%681n@s6mWE&6(nC5c8(lsyV9gs(PP7hc92rczs z1*EYX;^fJiOiBZui#@5-C{m?XGQ-G^>`gnqI*TpO>_G@HJQ>KO2~5KWF-$y0DAG#q zt@IR34uMfZFui753z0sPh|B0G^vM_P~}qobEq zrQ0l5Oo}5#*R0Y-wylJR92l8TH7-l~!I80%rumsuY;$h{jKzA1WRep%|$Mtgz z>Xr+=pZTauYs&7%qXV9JSn}5Q%GN$Inb@Zcg!Jn~;z5y>%z8 z^3vmGU7;TFwL<%I6im0bLCFC%Q-^5POQUw?oOW(4%3o!?IS^&_RtF+&ldlJfLJ~Uf zM+45QzIfJS^;%d8uD;1{8XM`_dH&`30P?~}5KCuNoE&~*P6xuc7wzHzhfi8dI^1I1 zK?i^(IYS9uox^YP70QEYqMHOIy;UmhPlW)g916w1eH_QvJjhlsxs zzRRIMb@u&1a;aLGnikCh(OuI)>sTNZU)6T+O%J?}F;*Owza|+_T<_`~#Wq-@lQQe; zoozSdrLkLV(vK&*9zm(eQ8rS$3sVd2QGM&{l&w>T>}7wI?C(l~^;=Qa)VPBkGn3IpP+HR#54sm{HY` z+mRkD9%1=qq|fB0SeqliDuv(YXIAV~ZgKgK%|}d^D44=pDbsI+P4mHNj^!aETG1E; z%18w+gU}@LiOGOh`t`J+uUxQjskjx;D#*6=jSCkq50sTIXTH*TAUTuoOfr{&8gQp5 z(IZ+dDQS+uxbwB$YU{MpYSgV6Js%ppFk+MQ@*7}oqcGrMU7Tw&lSwJMSnWmIIA)e^ zM6u4dyCpc1LsKr^Z`u`$#G4rQPG{dIe`MWotu39|N|QZdx{AG7JZ#+T$Dj;p*7UX{56pUxSdX5*+lmX{xiD172Y)8r^qOtsfs`JakDoOQx94|Zfum+8Ls zezZtV@&Kz_v2H}f%*thGFWQJGGO015Xk}l@lu>S0J&{A?_VALZ`AGj98-GQO?`Ion zey1g>LZ#y|HU7rnV|vAv3w8~GK4I%wfbk`UB}`S4+3I45lSh*7q z+hO`l8Q2kJcgc&M^(|;weL5bf!FXvPPq_skm5O+LD_)Dkv9d#P0VRZg1LnA0ds|x@ z9@udrnhD%^KuibLb#T>`9o55XyXu1r3*6Q%0o~}MTRq8ti@^1h*ru{v4Dn@&i)wLO z{w41mvtC!Fhm;x_C*nwI(|N*U>hvW_IEolaZFrT!HA2U&7A(LOnqvi2eC;=E(YKM^1`El#k zQ}QEbC`U9$-j_)}w5QbIh2(D4+Jr@t1`hn$ssHzl@?M0Sl7Qxy%a@DVJVYcuZt+M* zTgMhni6_ZJ)FzV0xF>J;a#d{z1%Moi#u59?PRq~TzJGU00Y8ZnP-B1t17 zR+L{Za&t*>4R9ORsqnewx*$Ff1j%AY>`r=>#l14Jah6z<{Y3dmuGV3S_LkZwNdFL4 zgH)oe?3}!rpC6S)$#jo=`r1deGnOa~Z%=e`N^B385_1APJ3fuNIMJ8rg!Roe5xQJDC_U?_s{tY_J-Nuwi)+f zWY`BH3AvFA+bwfZXCvY)F-@=*oP4jXFR69SX!cT+vC}QbE^8!5_)9F^g)w0jJz=Z- zj9E~}LB=d`lqDe%*8d7mP6ZWuc1||eUZutZKJf0wtU>8^+)9T=@YB7`DX_^3FP)i+ z-l}ZOlBq&7M@<==uP0j=kQyv*To%6Pj9eXS-qE8CZ7~IF59R2j!o&fVtm}T)n)zyOF+NOMiR^UwBUR5fNa=fSkCVa9152N(|@>YDi4> zO%JI&l0c6qkRajwR%$ zO>Wq5=AjE(0Ms-6Kt3n-O}y}A4gOiWEJ6fSvzK+T!b$J6YU+fqO93Djd_VvMQB)SN#!#r_D+d_kI&~iIvSZzS(4M_ivYX2bq40%5HH_M* z$^tksg4Srrsj8}+r(w65Ms@aBOk-Q2Zcf*zcyvzRM4MRH#VQd_I0ORy@W$NX!*e$t z0v3rCeE9YlhRre!e~<-Idp>cWJ{Hro9peUl!p4jv$vgDAsPKfCX;7=1yl zVD}F<8`K3jl<0sMOc_Wlt(rF{w;X`k) zw9awDr~6u`W$5Pfn!R+azh&bYS84v0w}D z2dB>*Lf_-4s)9MGaRN8iK=~Q5i-NDXC$tjK?G_&6p5gi(t6M!~9vq3pNGo2^m%7E? 
z>R~VSM}-qMjC$2P@HQ!V(6)!=L`dX!M$6Ch;}dq}`uZ|%M!hK|!({mL?*qB+E}bdi z2o%QKl~6Wb!?$t?jpGD+s%ZDfJc>-pKeI__E~mGcjsvS!7Y zusJ3)F4{W)=5srbLX5AK{q_nHnrrs;8QkXe^_70lKB#Ib&#-wSRLkR?ylTBoRU3f< z>157=O}yQ)t+ZSJghcUYG!J_kE8*RpAE}H2p%*%;JcBuLsRFkF{z1=w6aoc*p%r%r z2~2&v#X&v7qc#&8uiKzycKF>vbrF;+Rr+85ANEn+GiKgDpXB0|8&bDimk2NgQpNxn ze+{HkULf-<_n7Ne(RYR1SE3so6@q`V?lR(FK?xt_cBx0HJUI&wlgc!1SUaIVy9165W~)bEVdWK?t&E>anro9=REA^l2S{WD}o3I-yMc) zHONyJ~x~)-!6B6-+T3?r`y=Z8V zO!akq*TxVy`3(ue*5q20roz;H@kvO+I>w7{OMSbH3d~_IE!AtI^LSQqFvJ4Fa>~ws zOhb@g;DiViL=ZM;Cg{79Q>AfzaNnr%J(?J}els|}5TWs2c#c!wp<}+N)i_mc5wZ7W zemAhVwjT7ER#jTZI`nqNuM6Z`ZRtLRzY~Bz(+$xG;BXs#^j`+y`4DGI214ERq58vL z3MK1bq-Q<%Noag7-KE5Z^8Qv1UNPj8x-bbMdy|$ohJ$T}bI>`+59*tyv-HtI;PvcI zo|H+!6L5#jX?qG?N~|F25cWDvxT>YndE_OD#dU_~)dm2+`bXvj&Hq-`fuRDm3+B=R zYXWOLZz&qidpsRa@kdJ6rJ;C3PHHnP%c>iy@9_{QpEUqGU2?+IsT<#j` zWPWZHu#qxyaxzb1yEcMbmQ;b((h5=-535UK%USd1ii`NKG-F+nKC~31jRuTxdElq! zfocYDIvNB=U9Vcu=-9|45-b$pGVH3D>%Bu-UOz|o_*Q1(?DprNv9bjF7brsO;7Mik{3{fR zIjt7%It@V#4hzHeobL+%ymqLi)X+54QbM;#AlG{5(X)B%eE)bGzOJ0squW0&_+)V&)k&ZlVcwHls)yDF-7GhRwz{SlA71SeGBHRa#K0Baw`(tc>suBaw4;>+a^8 zyE`uH>D?LzyZSD4ir1++>Pr?$R3{gKHkcZf%5688(jxLY?;7mlzHc#ftUNg=wW9_cFMZljE zbDsz__PRp@cT8%1DH*Z(;yfsZo>_26cjDdiSBqYf{YXrVEem$b+i-;W#F0P&cizO% zpK!&@xt&$|OSqT7p*}I|w}A1)Ov}EhX5s`eaEZ{)j+Yxf)L-k2@t+|J2|508##_3& z!N#qw`E-OWV_Xf@2|(3x@m;c#;6p)5w6Ac@P+@O;9(k#3PTuN~dk;p2^C~m5M$q`n zcuap(cA~Vz<#{E6V7!wZG^fW|(pzO%7JafdOZ-X&%c+Es63hSqUL!oo zoyiE#N#9>D?yfR3EkLnsvow~=`(VoKP~trS=1V3$E-C5F)tp#%Osa^*X0dPC3!RHX zM_t~ojTX`?0`iOI*n&`bxX?+CZmCva=4&l}Q;fxA(Craq{Q}ryRkxQe+Goa>C*2@1 zPKy2YtuRm_^Z*E<&aZ-pNR{oVT}WoI5}prRv|7S=%N^py1zaw|Ad%pJy(^+zUlueI zVwk2+cCQ-$f{KzOyRP=Jh{bjxf^5tLEYx^B>>5N9cu7tIEk+Z9>}4!3iCk@h-qU2X zP+3&RXfPER%PaAAh7A(j2^#CyZFwKZ=7^+l2SZ#n&oRS1XbWI3xcA+g0SYCJwuqw z0lq`Ao}SV699L>VoU*kH+D~c2?VpULl4)!(2N*|mV?75{qY12aHJv=!gz<&?Cryez zBL$AD4emjwM2Hrm!{oMw5TYsQZG$4moADV~ArKBN>X*)(VZKrxm8ycdnP08+k$ovU z%{w*|#qZFcvM7#@Z#veL{Bc8G{rSh0?Wy~%+qLPfK|PLo`5I5}2V%+zg=B<&_{zoG z+xxbS*Y0R~mu@dgewfFq#iV*u=qyTtrb;6+#jV5h5NQkH|5|=uqI+Yzj2>NY2bN+| zI`nor>!afKKV?4&bXr~3xZl;F-)GgTO=}M778E9qdU~I6vmfOp!&O69Tv^`QyJd6r zwuU!pcB145xvW~3WbX(X6cL|PsTNk|tWnHEjvORy1jLMMz-bKKceKX81rj6k=C3;s z&G^iV$q6NS%SRurI6yTzd2uPUsH}YAjI2)G=RN(j#_Yx2Le_!BUR?gEQ~5Yu2LkK$ zs$H5td%U1>SNXN_(p!Hm?71sf4;Z9z*(qK!)%f52$1TXr8%s-|6fkEriA>VG?j}$9 zvQtpJWbNProyDFlZL$@B1;;-3xZU%Bhi>e68_H36S>?2j0Ak@B;)!{tLlRM%2%FBw z`auBC8Ivgpn2$os>qKBYV3LUJnZef>v$3-91?j*3H=fA{k-H^kBBfc07Lyf?`#!dk z+0dv*UEEZC>R@OSr8JmDa98lcwx9A-gh3Sj zPVeG{tq5mo-YMS6?BXV>ie#Ap47xQ7xHPSQA2fbzEiy~0qEPxGWkKaZ_zYE#=I?FR%$ z`X}qka2xh9=8he`O2Zg!>S6}k_RZB{TkkUOvE@H&OK|}lr?Mf8h(Ik~SvfcNDxH>Z zFz|tqX~j*_Y~(%l-@5#^wC$?DrIPl(DCsw6sl2~mtKY|&#{^g9*rTM=E-w3x3XBeL z&D$R6Yov?=pRNn;BM+?e`1rwNT?Rnl`2+5kl8tc#i*K597G11%OOC*4UDHDqD;=6k zHr5L*?Jp-&qRZ%eR;uAfBX9-Argcvy;pJx@^m>V@b@JeJlB#%ROq4E)sCM3S+)ZZh z(Vsvs(E-}a6UbJ? 
zi)t=*-PZ9{NTKsE!OCsNmDboQGZLu0htOgNbTfdX+Q}&4&m=}8vBXe=XnIucAv-Yc~5wEt#<(A_qRo#V9!r3PQ(T_+p zvDb$fg~Kxb)%*&vb!|;U&7}tCp>S;~S<9`fi_$p`0m5Iqo$}%pN)cPc^YgkcIkeX% z^WiLVfJnG$--9^Gg`n?Y!p+vm-x-%%zfK;QZnOS8jze;IOttTF`ARb4c4HV6{^UM* z%?bRR?$#0HN*;nEb>pN5w>oZFlNOzreHv`^dcxDLwCP@1JD#@Wv3j)Xvlr8etTDh~ zH+qA1FPfNN=bV$U$_{&w&l^1_REHp7O4+=1b4=r+>{F zJz}v137f{^?qY}leL_mwIf;h)#KP2$@ky@pJwsMfjkzVxOw~oop1wSB86Z#E4XT z@RsOP5gsq4QI%Q#rAz&e71cMl|C^R(y%bQy;I z=SraX>8v=nGuK(Qwce=wMqWCe%!=cD?vBcuIAC&p;8EwnXh!KY)$5|VY9g~bYoanc zYopFCEbk`%)_U7iNk+F+dH6k@OPRtu!fW|{B~$mW6rG`^P9mMg|(`OwEA(}UJ(8eEa{%8cMe z%`O7PK5(|??Uy0VT|B4)+wy5mxdFml#Mz~8&TD!I`8A0Vy9 z_LYqv+(tyYkaA?dME-0IVQF zq6on(SOc)SW|R7tuYcQIk^a?H%$GdpFj7aqHr3b^DfUK#a1 z1%xQI+DKBV)IxZTwM^89h-xhu@a^wm+Hf4=b(#WY-J3M zntBML_NYog>eV&+tKxaMLl*~)Q9x2sae`0zr?5OP9ponQ9Z5$f0xfVrUsEr;ZEmLZ zzu3Y9W2TT=H9Pe@c?1a<8hSkmdIs)AmE+0`hl$i@S+5i(+8GNE>~;xS&2k6 z&H+5_A3=)xrPCLtkWR;}m6~bAM3wdqP9%TAHz4izE`}h|E6c!V97&vKp~gD3BR}D| zq)>H7mlts>H9RPj8PD3TEl9gcM4ub4xZqVWCTHxs&b}jAxdIp?eZ+&1i3cr|bE6eJ zNt(*JjbP4uHo}2$*i)qYnsq_zoNa9ui${ZSJP_@f-1>9)PibQ?0?M|6b-x(+1)Y?f zW*)*dZzB(^lAMws+SM-aZ(W6Kt~@AzN$b^?E6^ZY6htkSvC|S{q45O2aUJTNyWuGr z%RE(3ad~f1UNkvN9Gem&2`a(A@g-jV=Jt;wRv&hR94als=IV3Vc`+hRq#?sJ#t86S zRV2}$%8OgA%)m{3f!~o&zJGE8J(=}OEs+NbiN829N#(8n-Yby^$|$iNS!8W!ucpP2 zh@1sXVW7MuRhd+mt_t>)L-!~K4+Os2<%%7S9VZ}2CqF1Ij&~sytX# zm#$Hiq{;({!UaqYDMn3;hhD2bhQhpsaK+vjh3_!~%tE-2YOpH34hR`f@__ApPq7XR z6fA=70*d{S?l8&Uu&>Iw0?@tlh%6j+?umfI=!E>h!V0uVbN&)Fz23yK*~(I-)#@mv zhx7G~E2PjyyG+L)KSpRHeo7bg^1U$+^^}&D0vrpJw4o4iDNiEJElS7|{c#Wtn*zy$ zH^+50mDecSgrdLqtL*>omLX6;f$9i88pDAxlnMZ(CKMSbj&n1u*@uQ$EbBR0gBN_i za~iADLC8Zzc5udg%(^8Mn6m^kxHlhvlwT@%L+j=^&k8)FB8(p!Cn86|wejcDAqU;U zqr?!T=T`OWv#H>7z$QF4L@jNekHMRviw=Qwu5_My=y5gvw<2x#jIX>(>)h;pU;HRu z4!v#dCsv@do11eI-U8dSM)y7v4}B_g)>g?C(}x2VBCw{Q%=c~lx3{eZ@BI9z)fV)r zId5^Oxu?3(`Fp{XZ>*3Z3_K2^e_eM6zd&IQ@FQW2#Ob+N*I9jO!J?GJd?V6w@6ufM z2J(rQNelv%U*DODS1a4gBJGim|J+X8o`Nu!e3$2^Ij1=2*1ZZY#d&6sq__z0ZtVVZ z%b@`1Vwk_qejRWsHAN!<@&$7W%XUuQIX=*1$>iv>QAgDw>wv?W#}9!x{`}C2k$JN= zCaTH|y)81ceo_0D%K(8}^kLz-mYD0%z9}`;ALHZM>0euyk$Uf6X&&!%s^#-yDBrCf z8c(E+J?KL(`pMv&4DAlE8BjDo3=cWxRLd*^?lAzOuhp#56oxs`%_8+?z2M1E?yRO= zQ@i!sAJm+GC?7C(H2ZVUN(XadwV7^Fw|nXA{04o^3?sonr2X>u?#Yj!@t+x(RoTJ& z6TPNhzMN7k7=bS~_a_Pxq?eExi;EG+OK7L}E$!b%_;Z0ZlUV+=-j-PWd00{RGlh;?}k=%CeTjT3gH8S}klO z-cE{TlvhYs2G32%Ul`E}R@0~Cc;<7H^_E#ihG;W_N+Zn02X1Gb;|^{|d`gISN$vPb6iA3F7=ul4nrMeB6Y z*XQm7VkWpe4VXpfU+eMFaM3VIbb24aSPZAFLbS5=tS(aa?fUf!E=9uP#EzhpbuBPY zQ$oYO7;OpS+ttUSoS^aIlk6G?U3Qcf-(;O&w|~pSomd(FQ2*eZ;`*Cg4Ht~+R_;U7 zG*1wbjFGjFzxOaEddCv@3C?)J?>!L=pYD~CkOjz=7SenIVc z)*kS@Lr_avssNX67ObD=zEWqrym-PZ&h#5;d>goL@yeXy@sc>Kw{M&maZ0mb1Dq7= z{6`er;eHH;iOH33AW#bDI1sRT4|Q>Z>!P*U!U)Xz*6@&^wfdQ-jg6m~)r>vHwx1K5 zRNTV1ZZdGK61l%&K^-sQMq3SCD{x-6wMMlUo5U!}^Zmj<$*ePHX94rG_1O*t>`^JS z0mH<^inR_zOl>sxm`6LmKR7YhThXi3RMB&PllwK#Z)ue{h&rb({Q!uxKDj+GFHFA&Z ze4l{Gq>7VX%s=>geYaciqQHSuR|i%1y&m=(u>|Z?eHwv{KTOxa_W2G~&0f2}jLm%* zObOC9Xt+4r4eny%jmM5f+OPs{yf1`J0nyn(g$@MlHp=4b`?ixdO=}c9>CAOGjc+w6 zKXIuEBgQZ>Id!8!F3N3K0v4%h$g1*YXU0)~8k4uWS8wtDXRScS>lk&cJHrXdZxaa*E0_iv+lS{OF)}dP)V5I@OJP>2nDX zo-+~l_juI0*DOc3Ae~K1WW1WNb{8dL?XhpZgMSCsd;;M7t=eohrFscoVM9kddRA<> z4j_DA^}`RQ{cYf{w?(O1QEZ&*yN*Z1H?2wk-`wgXYdgN!d(4dHe{W=Gps5=uM& zs6F0!cNRdrQoq~f{&Bh)TmuqoOE7yfbaw4920bEo4KRPiPTm)k1NFRe4X;G*ZrTQe zN?$c1TWqgUorX6^!WMtQ*YhxV8~87K$A$rMu#mwxJ~l?O zz78iaDhNkh@=@Di*Caawo@j|?6aYm+*ZilMLlU}{gtskV88Cs}0V(j0gL#x&Xv&e1 z_7lIvR_c`sNHU&qLy8%+cu}=b!lm%&IhqnaCVFS#fUS=zl`Ct>yo4vk6u-(>U!;CX z`L&M0P-kEF5JOLUV)5e6%$A9xs$tc)^R`aO$RP00^a`i@enBS=l`jHG+2!qwpKr36 
z_39rYrwrQMtQsmXcLJxux%04r>yAqrqfbnDi~EUbF~ChKf6IV++?TO?nIM~O&1Fiu zAuLZP_NZDiPKs>~!Vd=GI;gac+@dN+$6(;}cwKYSwj*XlT$m930rI*Pqr^r@f}Kcr z^X**{tEvE!Nela;kw3UMBNfPkRf#U~HFq`1uFg_FH~ZEXkPoipFdUIOy)&u5ZW94; zCOIbOR&{W&9kirDMstu9n~WP(V>?NGyCGbU7_L=z!W*>ZeW-*1VuHU9nR+_S&CWS_ z9^4@yQrXnl*Ur9^?vvj9smcmYKq-kZ-jI@VOCAy`-Pzor;FIKC~AnIxkg#JEFRE_du zH#B0&q+aZPUhF6-dB+q%QNXQ_XSDMmyplN_Y;5q}yR-|V~XBWrhISFaFAU8k6$!ku*yc^EJSGK*T z=KmJrv-}|W)j{&|Q29k__J?rgrdiT*(u&d(@*R>&7U2?b7&pUyR-wDvz_&Qyw99Xw zKbNE0@4L&_{_7xztJ>$S{4*m;MhQDpY&H;4L4auz-G8eDr11qq-w*6&e^fA8@^>Br z!b$u0v@3qp9<*DRuxmmcu?6CjG|@3k`KVi=D)YuWFKW~JOaVbnFj(b%KK&4}xuml7 zF64CBx^)%E!*m~Njk3gPT8+5sHpJ|qDdP~aq;(PO9%T5M_-^B_`~<+cm8-v=e?OG8 z*~-cl?h1o^ZZvONyYo0m+b^TgXw@OB-2?`GgGoNA*A^e%{NH5$Z)T`L)kW06IxI=<98b%6lU} zd;iB+CHAF5u!l=cJK>D$!T?2$D0_BP5;hA=VVhZf#%kkFlZ?@=RQAxazhDq`AhEds zgq7{P%O6U_+S`NmGG>G^_TNOB>Eo_1pG_M4=u(X_vqNHs79c<)55!(1c}OC*V*}wO z8{dE%PE)z|3zSu&W$!s?u>Xg-9gr~?|U0uB@mjb^C5Ev3=!e?GFI*zjmb|Q4D zyu~u@3=`&LVB1jIu!OhXiT)16P)2N6vDfmM}z$}e0Zi01L{OR))P zfu4}63BO`^8d`|I>r7G-zM8sey-&v|J?^%A((R=D$5wrax+(Cr*S?+LTU!C?AKFm% zThH_E@opW=^W-w@Hdz;)ORAL#zf~Aa6PkSkl2;ipB!Ak2QaYfg45d#1{WD2wx+u<) zA5zwZN{xUE@R2E}ozxcj?YE|}u?71ENSjIfgV}DJQ@1F~XP8Usa0{iV?=qWQpO2;v zZ%*CsfgO2a=)0Qsufd);lqckn+HkfGu_YUS*8xkbMMbG+PZ-5pIx5W9xDWu(4{*Ae z;MPsxlNSsOfn>me1GePI-i?ZjASVHTm#mzJl7?24ui?0DtQoTo zs!1+h#mj{W!Mq+g-|#}8Zy>e5meHZgrj4= z8?!cubAI>-pzZ=nX>G6<7U{7Tqq%Fdj{ zJ6-jjMV`da96|v>(2xaDnTc#7lvUN*e}?e2EZ#%xDgF@TCuW;Nd)!MzhF#ilBPbjN zUh&S~9u>OfdG`);J-nG1Jyp5fYHt>9{t)nNR%I0Sb;+PHh2|qcnGMo#QJl8w2aXxPeRIhTR9(X3!3R|_iCoR%=rf{e*YNuQ9J2MWPNq6ar z4!pI1Hcme~o3T7?Cn}71MA!X4BthWHg7F$S4~b?XA~449yUJQg`8$lGAYb32RT5)I zYp5d03mRD>Vh_R)3Wq#$U)jJeROYo@y{cnAjje|rbW=m_5v zdRhre4peW9JI6TY%}C1-uZa$T%TOO)MRQaN5+_TXK*8h&?#~4G3<`vF_JKn4B}QuG zWJA+`gV)!p1{Mu(u^pqXhCoacn)1(OF^k+Q143^xvVp zbL#KqOr9Ywh(R))QuiPaAe%G_qZz4~f;t^%wO@@YTXY1Mi1bq`U5>vt73?g58&5gA zGXtii)TcZ5eX>j{;)dPC|}Y;umdv*NnW%@a{bJ%bE9HM1yc^v49`?q&f!})o1m8}dVgcOqEpVx4TXOF@ru2`4y|3%+mhgT=W*RK8 z6(O@ep%JM|2AZRqIayLNy6|@Ka`{9v@5Cqi3d8uB4@&O^R@KgztCSwA@*G zejM6|)v@YSADEAE&J1%pcDX={?om(r#j7lDc9prji1zFK94xnCq5@^uO7aSZC05 zUNoyxd;YU#6dH<5$q{+ee{cxV;hLJs1^_YMsC=+b2Myj7GTY!a-XaVP@^r~n;5w-WnAY*kzmT$khfH&2ouL;on2i6_id@}sdR_6ReKn5@%}+F;L77DhvpWU# zR~PA$Lq(#_o)&Wd<$LE~$tH=!EFUNI+jRfk>=llRTR6cNap8$|?)VBVD91|dUAvex z4XE1lnX>E3xizcj@L_rUw+d)z`dP94nYb?R{>wC-2Wlp;wi=T(-|~XCVfGxN_6vh? 
z%O@zB3xze{mlYEogz~r)a~g_R!$qCdnJxh~9m-+< zUmHO+y#4ztJ!HJx;|xB;xnC|B?y6|d&&cRFbVA{Cxacs%4@gSJABt?8;h}6>RY)}U zb}k9K%06AjC<<$gIWC|eRg^(GEI}<5tiQ&0=7o96u#nP;%kfs=YF1SYoL;_|fqk%i zcYjn!!PA&59|J*g$S^xB^IAkIuG}MgpS-PX%t$xj)nXn}Snn`HfyZRcbwbgi^)=FD zs6EYAuv}CSJnQ6K_r6wz`$U7Gvh4EHB^h>UCRfN0>oF8QmleUAP=ENiR0;ep?5Ol1bMx<)P ztE$4zlNy*+vINO|PA7Ftq~gOIq0xAyhbD?C3aK`Ca&m7+=AbkI7Y(t#-b~w4x4H>u zZj^{xVV|S9z?36&D-|;2K51ql2!9gKrM(;xDaXF~J}@LE+sg!Tq`(lp4;Ai?l>b_^H}p9?N?P7 zRV(TIQAf_v`BC%S#^2;KEadAi;3bMhZ=9n7j^D%HhYl3gyyy<+^p#}IH+p>p4I>>- zw{&}XL?ScctP8us^h=)3WUiI)AbUe~H~o+&(hV9zDQ<)?dmhg;tZSyNkSKf!btpCc zm31j1>wLBpRv`YAS8^1dobY9?6!C7|e{PfB>sVKWPadRukA#v!b(vRHhXx<1k}NVz zA&n@DOMSSa1CaEZr1Qc9y0`qCHF0z6pl^ZoF$ia4Lg4a`fI&`~0(aoLagn+LQRlq|N5^ zAo?@Ty_40YcT(~JErnoFdR*_*r;T>$0D)ulk34{L2mpz=&?+f^;>O=4ZRfvdPTZ#M zx~)lhvVJ4yn>s?eeeZjjL=Y<9{s&aT4?=5{ZP?qoUOTkK1S_$(jNz z*h0Td6Ql>gJg;ZuO-W6E2>{ur0Ok9R5*P^K&cZ-$X5avZT%h=U!L(!^9B-Jyhlz~s zj9V8rTdqPRthzZZx1Lg6)q<1a1_o5keeHD;K_r_i!DZ5-6g0+b0Q$R*b|>%Z>HMFT zUP}nh?9$2{7&Z-IJ2+%5cq_Hl;YtTzhIJKRG7Qe5N3Q_~%5no`Jsq7tz})-WD7O9m z1A&SYcZZZ4FE5lR#{yqqy*2uG&M%%XD>_(xw_5yI*1|4wb;yuWmVlRmS0?QP++|gB zKYxLG@PAH&(tK)a1R7t+O?NXfhvdf*9}gpO7D`)n|5rxvc=^t{UL!E`&pX(Tml8^17>keUn3>qx z_9L=9pXlpN>w0}2baie1xNG~4aEF#*Qx>e4uAb8tATslC7%o9xQ!$=jE_X*CVQ(cj zt}IhkSE-cMl?pfKZDh11MfN=`+faqx>Zx1Ou+!y=nyU5fY>MsY@k@|BGrB%#I&fMy zf7hQMyJvp?-Xrgd)H@t_M6Yz)-%q=y{(RZqbke$g)YT?gIsND76uQQ)aAI{;TV0Te z@t9P)qS(&4Bf{aTRn|ste}4HEdCt|Ps-evg+l9%YLdZI~68eRYJi;uE+=( zy^}oQq7v`}YQUPoHF>1bgKy<2UAm3$u`IoWwkzme$12f8jI200yT!cXn)Vf@plwr% z-BhJX%=S6ry14`6?As!${;kAcOG{^H#qcJ>TwY;4qze*QhNm77#{DRX9CcvsvmK>v zXHOd}i_?jQ0%(1K`;y*ys0JjN1KW}kq$CXAMaKJE)9GT8$L0*PTpikq$arjiTgC9c z0MXNIIk91iyVMQ8uU zLx2A$raTpYXSZbU+t<*ba!q?oSJJLW2WS#E{5i8%_eRN_EOSx@h0EWSdPq0Yde526 zMsj0FOZ@-%8sBdjQ?B9TMqw}+!xpW2vVoOo$3vn|?*Dyxxe6SAQ39 zr}o=50!rC%N7bOy()6@2%<7C^)zpoujsV|rSO3JAl$Z*CT{W0^43YrJ_Mn~?;Q2Aj zd3Dkz=BEy?I7rBkCljCkJEYP;yF5|ucJ(;9gp94ebyloA9_F{nrbSsP7Au+WbZ)t^ ze9qsp)l0SXl?>D$-RZT}Gb)M87O3hX+x)fy_TH-_BOCf2@VMIzlF*J$*=Zt8L!(BR zTETTx2nyZ7gQhq1?GWmDTs`;EhQ85}V+55CSXm@0=3d%KPU~pyaU2D~hiJ(>hp_C2 zqSERdTekq`t%i}cCBccsRay4VLGDNNIGk-8UXIXnAFZ-=7uLeIlanMi33PpWqwGzZGc^&=nRnea|NaiXT#nC$KguRg@; zFjIWnUqNM&XRbUl%s3GJK&>n3u{D$lGy7*ta5~oM@T^4#>P+7MLU#X4uda)UYWq6k zz3wU|dWDqT;HmmB;tp0I3qB5^%}2CY9sWZ~qv}cWPqOz#awYkt zVfMKTxtqb&36J<(y-k6*{Go|<^2nP?XLx;d4Oo1rBJAW;$YLuQ?P3oWpZMX9ftu~R*EY_5 z>qxKAn}=;AoSJlH)-f#}#G4B4{I$Hh2uEFMx!joWsF~ooB)hs%I&KH;M`>RX{u zppQp9s+yUpG8&cB;`Wa`y;aBL<&N%mu$7#ct}8v{IlaZZ5 z=Zq!ATK!0?TvF(_71yry!WnJoSz3fFUExbel3UtEw-Cd>$K)?;JKtu#>kZqP{YrS_#AOR!cJRfQ$C&JWVVDMyly zLYXAKMK@e#{8`quROGJhxW@|h21{q&-^sT-qBk4wAa}2+LTLUe`D=yE%`~!&m;dQp z^Rse1!g_VVt8}YVd}~=Kb&KS0C0xZ>O05*hZ^(wj(LXfpj?Ltv2gj zo8?Ha&UZ5`5o>v?l+mGht-Qj4$}B;K*S85};;G9chJ`QG=>2rtb9JnpBl?`eIEl08 z=F8#vJ7>(744v9t$Nn5!hks;X6vl6}u0eqaY>4|9XCt>DZ~Z{tULNz&c1aGSL$$ev z65-Dm;A_w05pn{E{A-9!a0?dI)PUjhOP!6*ZEg-q_%@``%^}1Idxd&YNmfpta)EM1 z&RUkbaOAbpSEY9-TX`D!9r>%W4Jryw`9t|r#SViZe<6Rv*rQ|A?vR9|{=&j7ajm`3 z9#wZr`#owb!W-}fozU3pz0hm`9__JPUUN*ob?Iu32|rp z;kgF3`_32QV@_zB`;`4u!hd$xDOa20WWvcA?On%R#~mt3*&W9n#uA)vzN8Pqkp@@8H+}ttZw5(A?hRnQ>%D5kf1xQip0-5#VERy0HuB#4XRgf zb-G*_%N++ublNIM#GVdz$~vmkTjRb=*K(NNEugEZdHhGvZ3=6HEjCLRzdeFE0oX)7 zxkqdEzTys>VMG}2Y&qaOYTX-Em=toaod7orjI7}FYP7j3?FLS4rMtiskCPWEIKdHW zkTR6eV&dsj%fKEjVTzk`^Y7?1WFRaVrU76Cf;a{N8y;#fUq(YJxDqy{6sL(Qzgr|< zTp)2LI~YSUY(&;c()klTBjOkFI^I@rEht}`=}2MBxg?|{J$Jt&7HtMYDna2fN{boQ zP`M?VbKqnur#jT(B?*1#y6e$2szFjX?!3eW28EfE_{ z5Z5feEJ4dm=;L*?TbY`i`5n))QA#!1CwiHc51K$u)Sb^-%!#K(M9x5?C{R{pY?G{9 zI8Ny%ES#_@NnN&NtLCIm^Zw7?Sr#}eyUL#GU%Li(pajnQ?EiJ*rHbr0*CYGnEAue| 
zWbHU}Hi41@^`6J98-3-YuMD5!(ezb$i}Ge;kinU_E6UXSAt{Z>rnBBLo3|CdTj#P) z>#+3d*L^d`u1QC%+jU)z+jxH7UWLk(m^2EVnVWHB>E@UNxLY1Rlq`Gft}!F=UNfri zNks3P>pkmn2PCm2@}SA3!t**oDuLcZX9^2a$-%@x43$EZhDiO6m_Xzq9#n4qn-$u3 zwrt|f%dPMg*kK41v0d)X^U18T!x8iYdNmW93$@Z1@d$f*-xkI3G13H5CV-D@o?KVa zpOpJ&g7BCCl0`|`k#s4C9-;_@IFM4PRB$Q-SxuYTi}&+2B-&RZr>_BEkOW6iu0HSQT6zh@E+HVE_|mVKdIxxk8`>1o!DGj-sSrnCDQ&I zXOi=DGG0uOBRfl;Fg`o7AH&WekdqSmQ&UOR$NU5#A+Oa3NQXY4Q`HpCe7r)w&$Y$1 z9#KxO2rMM47A#8d%Paw{pLz3Pjy^%6@B;TDR0rTw=z~q2&(;o0mcIVc?FS;mN$jhL zoGYn2JEhaS=%ril>EShyttwvSo-rYb-8%qn$t^8EcVb>;nW95!=uZ`UuXQ+NQ_LD#8ldFQlyV_ z8HXb>1RRuE-_{gBurj>nfll`}UR0XDDRo=S6+Sd5ZX@FnDtDj4vPxo}(%t{AB*>(d z)E=s3(*NbiN^unI%{*&L$8QE%m_qn0VNpTH{VTY6%{GUaZg zuKcylw5TpaOh234XZoLP(=yv!^^_y0E?1bU@>yW%9UfOlfx$jY+qzNL&<0zYOH9myL{1h`)?iN&`dd|p}^n! z7iWqFt?}fCgs5W3CA=oLvS`R4-gv;)OrWhPdkYsRW^eYJf9z13NEw#vp2vP{7nYM9 z@z^+`AT4w1v@^RXAqyE^1G zVw`VIzDvSXlD}vkciQLJQ687Z7k>%5uqox8f!!zyy=j=owihOFIgy-@n4H}nMx$i+ zNr1riQ}Ca9vDMU~rRM_Hb#a>)6=&YvwCPqv(OUE-VECHS0RM1( zorRg7`C$_of#;R$EI$ml@aH&?&=3{}=9!!PONO3bm9Moo%xB_11kiGu5mzo%(E(|W*UN~m%89UW)1r-Q6OpSdONsqpjp2Ot(n^TqzQUf6`KywCiL*z>t6&C{%i zl^o^l9z^GW2ADjOt;6+-B{T(sGCl4f9rw~S+mk;$^ z{DUY6{rJd1(1Yq-c<;e!@mgz;u;U~(pzH-z+=z%j16r!JPW}TrHQZXizX1Y6<^?BO z>fEHteIFEep{Lq@NJZn`0j*X}C-YA_sZz!L7^r+oC9Dz@*r6B#%+y0JUf{XM+K%O5 z%i3qnkSH@DwvS;Aj9W0tm<|xay8t7gsAFAfq1ziNn1Nst8}HI`b4nqlDr&X`5))(f z2xedul)Z1uE9MQZ@9iBK85=uoc&NO%c>jSQwHz`$bH)`l)%uP=gGf}ueTlDLjo?s$ z$T}5ud;K1)P$#w5?b-M*wYsf7Jq>*bN=t96o0S<2VG8A`>R3+Zx-H=ZzDv3TI}~_K zKtLVAwuzKs9gFZR1mcOv5vZ!nbzL3Lx~ZL2ELrwDN$p|S%de~@7J19UTnUIAz$3Xb zBA{fs!4ZjJMc%bOP?dhKKW@dKc3pQ`#P7^m*Q^50?~bvs@PM~rDTwCYGo3SZGSKnk z?+^E_RQ~`_rlfhpY%0L9PhA9Y0^}0ZSl-pTiU5kN?3J{ed?992iu_-l6d{b!&^W!t97dh zt7nGy_wxIp0OCNv9gF-c`XYb@lTt1dK~s=an=7sdI8z6JnXxl+3Q#O@-IZ2egk}Z0 z0NvAKnfBV9U1WS~unHP@bWsc3!=yc;6FTAu1aU(z(Z1hH`ZnY_K+X}&rnLV!+k=fM zuj4ibZPja!&x;?05_)@ycKx-r#X}Mc>+MGqt@D(qX?TwE6ZjpAfQr9ybd8y6PZFl%4DfeL*&Dg(7b!f@w@i zj2)gy4>kF`dEl4hKLCM*hk<;r)>UOKhti_VXkzQIEM2{_TZJ zSRGrEJGS)UgfvCVXd%c#L9NT*Y8S5)TFE?oI%csOp`rtcAC`KWJiqwjRGUIa5yKXTRWOv{SP zW~}#b%gqQ$4{p!(NZ1vb%^hjkaaCt$>W$?o(}$)MX&&`08eyybb!p7YG%R6zo*-_% zStPKyoB2rXYf2eo)Xqu>0XRU3bTL7ad5`M*r8uKfQO+qS=MBMea{fHE!s)9gRK)+3 zGEr4UzVlRwsD~847orT*s|ud!(keteAq12X;-#2i@|3Fuxm}VlUf-fCJ;$r{s!4na zUcM4f{b6{cyC;|9iA2y;QxZ}&f_wc(a05#XI2<80k7E^_AxkZi3@j^aVRxL^>^7Ob_S6Y5u&tBC9%x@o1b>UV_z88v6zBou;Epp^(tqoxe1)JWq zLX6^&05_3NIkO?P_-9EVGV6l`X-`5QxvUGiDtpMPA-yKLM%)l{sKHaApYP%5ZFJKr zR>ta)V`zM}lFFitCJ;qEqpd{*mMenOLQ0?}Q6evK!eo)(=gmy#4Aj$-=1%U@W5BBMycfgJo z<+z#TBC6zRsx;upeL|I~S2LO4tnTCPTW>U3X1UBFiyi*b(lapwM1ODEl)b=m!Cgax zs)TUQyg_+vu%c_pH&Y-?uFYz}stxr(**^XGbNVI!@#-+!DRmLGLAoH_IsJ$&UV9oN zc=#`&-lj}j7GUBqFRhj+iQGTJs9DV^hS-~73XFG2d*ZER&16FeF|U=j+1>c<+K}2u z@Qh@I5^9OOJeK2t@fz}^Qm^YU@G50lL$OYCNhp3UmL))Y2Dz9MFs%#?Dv?0Jg6 zV$n;z&Aa&yk);Mi$il9-nupzPd` zE|_1o6$aDR|F39^B74{v`DgM++YxH6-RBhHc@PHS!WFHDJ0Vz%JBr2|gZvgl3P`Au zDrfd`Es*{@GD$nKf$(JG`c#tFSn9+j5?tM87gVhG2bG)0no@J1-);F2$1UzJERG$^ z!aG&4y;ZW?-}$i+#C9!vg{PA}m2OW7If4M4@@s$}5mm11m5`mP?&6aY9t7@-65;LE02$&Il8gBz;kB!3emQ*ocX3=7?L3q^K^<&Wvva# zUN?1o&rq%0|9-~Q#t=VNTzFlgZ$^f1XC|I^HBYD3 zZ|f{GmD{RpOjP}!*2A^j8HP@71^HEAdZ%1e7tT#@_oYT_{jk zoYC=^^mrvQin?FQ<(`=5GG{>kMZlkz$!CV7NNT&wbm>j)`wods5$ZPfMozvB+hbn3 z$_4P*vb^oB@?(+J>#Tn*O5jA)U&jS5EAgRBQEY)vkpl?AWaR*0b(6cNAG|xM;nt>A z{bKECm@DWJeNT{G=H|2U?!oXA4%&&swIR$Ie`08u3B~;4AJYaBj>ma2FZLvTEi?nZ zt&lAOf%g)qqT3vOmf#tDkbYdp&o6E1+KA7wzyu&(gd{Qpp3RivH6z^TzQ9}$flyq6 zYgn_i4vfEaculM+#+4LLYzDw7UielyW-I#?baRbryb;>S%auyJsS~XD3||t4~R3@K@<}WEJcd zjW53+n)c0Z-w?3!@hQ;xFr@qIP$O6}Klwt(hO-f=DT_4=G?taDB 
ziL0FtwWGmVSeAtY#6csIUoe6elBkN7YK0{o7b8l^^Eh9nyqRV$=kLVG;VsUJUdArq z)+Y*#WOc#*?BavacnB;#a{um}vLlgYv6Hr?f$}OrTFuJcg~bzFQz~l=q4l-I?6iRN z=txez1Q%4YvL*RNorE2g7WsCJL4xMUV~SGWS(G+_;s9jp%)6^u+_C|s02>sC4g&o2 z%I|?6ij7Am2mcvk1Bg81^lzS*kS5}6^LKTOy+2GyT9mVtZk&y)O({e#^HrR2*0MXl z8}__A>JJ4CkL-_(?hL%f_GccAx3dwOxZNoM%F*4Ts-LBd|GBq$4tIQBeq`Tl1Fse) z$-Y42ook7pXevXu7dHH!|z2d*cX8Ip# z{kDk+QwQJGz|@gMRJxTHo|TnN72+7l0D(^>NgMu;YJ1l~a zd+L1`ge=mW+&!(obC2F`jEOzRx=%?v_9TC*?$U7b?ZPK%CTolz+&8Y-`n^Xk?)I?~ z=KYPj58d|7bo2leFzOp}1-0l6CmpT)Vq7_cs&apk+wKi)XKGK}+AVSn-2Rem@dINL z#q5j2H)&&SE7Ktrt3;Pw)%1zZVKF_?q&0DYi);pejt{L4Z139!)uW>&5tWg&8q$&d zYQzag_heKG!Vh)=FQfGN3H690_Uw-zsl86#zSUmA40w~A>_VB_ic2YEP&jVFGdTLc!J;94=7^~+UF+< zNCIV!sC4bz6>ob|mVG2|MHFKDu|Ju^*%g7ytnQ;hp$~Z#vu4}=nz2JK&Yzrn-PW^p zH+tlfj~$O1lh9a4wsxVi)&APsEmuCjxvgJ*nQPCZl*sXqh?JD>zp8fba>$!$f+iua zDk*`p2pw`s_3YAOK;`VJmL*L!(4BLWAx@jU>pj&oXv8I8fgM#d2C|Ni^?6o&433TD zaEK2G(`zg?uGZD9id`#v6ZZ7RMb4L8z!TJ7+0z8d)&qHN+mtRU9Z`CfO;5A))xZDg z5Jc}0?%gNsRF(fzT%s_TS5+r9`;@*qnIqw7&V@l0CCWuwx5}I~Vzttos}wd(F8f|_ z=hf}gw%S2n@nfyOw5crG$6I zp%;9$_}WhPcK~EzdnHly31gpm*wJT^{Zg}@pq#})IePD)ShWX2PM&-<`Pq@P5rmcNLB753es^X2f~1W|_^o1I&Auz<&NSHfmi1H{v*L*{8t1yQ(X;9&T25C| zsAdqu9a^S%sgey+x6K}}eIAnt%=gsI9;-#y+M;z{!1t|v+YOnluowS5*1R+1u|q-Z zY(re*qbEfU&Z#NaE{kF=E&9jzM?(Cx?wr_!^6p4Md|E|^d5p`g(|Peo=iEB~4ErRF zh7%`>ScUd>AIUQ&yLs~hR#8eXxw-$ENnYvG#oGz$Cp22`|5;lZeLnoelWrEDoY?Ec z(XHkg#iMrUtNv7PXIFaLyts14F>4KdP-E~eX8OgQ>Gl%) zOhDwfUV|;&&^PdKYJ_j8vAdjd&7|=9MB=uz3vh5tbn=1119BAlk5zrjBxh|(bdW(% zgS5kTt=-EE9B30N*|O!$n=SXX{aVm=CdFh(t7?2Sw@}6oIiU0VvEDyjU4ME7cN-Yn z?gAhY0DuS@cliIKOq<~k2bjRxdd(nuz=i1^xS-IfA=UUU1uG{kdYoc7`|b#Xrw=OM zt|W`z>W0p0&W0?4wKwWwL*|76731rYZ=NsO_g%q7tY|A9x)Qe|P)@2D$T|%l(#JfX zMB-BrUsE&?I}Xm)Oh+HAu9@BMv+P!1{UJxQsW_L2%A6&z_W~WQXK`JycUZaH!W$S8 zTzU&#h(ecFu=@;$&b!xo{p?gz`F5c6Y}3l{@X8Q{hE}*MBl?Qrp`5C-G8-wq!WLcaLM{2QQ?{dvP@$dI>&A3HC%GgKa ztTc_@6Pv%q*5q>Gt1sfz4Kot5m6GO^s4?rjQ(CK~6i zdwsMs1Mz*Gz4wgQ^`ae?U{VKF1Lt|CtO#jtqE;LlZe@7ico^8PsAKnrVR7J4wd7P6D5A~O2YX{c0+BVIFD-`b~(KTMT)m)-DY;4N7F!3bYEvH=O zw8lx8O++`GPZry{(&MdiRr(Cd6gpAbgPSotJJJa)tC;IL7~y*Bulimk@o|v6LcUr{ zicv)C=*D{m(wCNa$8TjNv?_26*A5mpe6=lfJYL;+*rU*5RQ~NMZVZ*>ea_pNZ_vui zp4TYz-2v~kvV*4t*Vd0agHj&rli=;pMSiD$>gx*yz$ZS@6+m89wm$!o-B&dWfWRd) zBUp(w^adi|w&%FD=xuj@46e86BP{5DEU`oNIO&#!omY;}Pd&uD;)WR9NcS5z>*GDn zw#CdEIxEo);gg;yPUWmT&BAUXT|3#V;Y11w3M+?AeFU{xVAkgs2kg)2)5z)!Pu0FclNz#B-?$EVx zRIcV37GXCe?rjqKeH@89VZ*=wZEG&XG}9j3=QpbHwgb3Jblr=TLi>CC5Z=!p^Pag{ zJ)@C-`z!cKp%?n5;pCV1cl7<~lW$I`F0YVM@gi%kPc>+=ycJ=&y+f5tkT4rhuZsO2 zP^%<_FS~nj%XM4964t<9X6s)fE|7QRc_i#ODI#xJh&waDG+HO*@{^)RCZ4SHZ`tfM z8=&%M$gBxl3p|iOUUic2NB0~0l+0H!Ij%(Fu`Z}fizb5rLM1#qf zAN<)s3GuptNw~=3G(7BVoI@h*V86&V=lrF?-ZvJ|iz@iPDW%5_Z0mX&NDg0$dQFsz0rFIT#po}Z_E^|Zy){2{g*c?4<954(@xJKZV&hT28|^%(^pbnZIM$^O~b&S73B9a06;F7-`6OMF4A)GeU>Yu5D5g*Vf-5?5YJ1dp zePd7h?(6*{Rv@AV`yI@sDV;hD&+cZRo~S6pz4B2W>hK^O^v8hSDyhm_!_~E)lC0r= z#4TWG_`oqKI=_g+1%}d@oEW#lZVx~$$j;q?+9y6^6DYEu@$b(*ET*ZkkyS8`E>WNE zuYc~_FN~yfRVub?qTZ2GF(xKEdz?Kyq#g-T0i_nTkYvM!QWY2_q?H||u~M%Iz@)v! 
z;-^MHA`*$t_7w<*Gp=CAKV9D zzVQDa3?B2({|te`TO+C0$IRgnyjljg?%FTFgb+DcO-7xl+lPA+;KAHC^8OwI$eEC_ zoZ6}6^v~iOw=0STXoj=H!~b(cW+5Rj*Tvd-#@P#d+_?16J@xKqFg%GB%&8}^@X zR`WtFMQJ$6w>hlP$ud00$Wwk!2}|3l#BkFmhr@!PhX;TvkrmdQ)^}r9M&I^hryi)D zOFzO|K}rzW#=50&H`KSh^I{;;X@~gs%S%ksU|q-SXUUFmBy1^%ar_IpqQSA!jaIQj zAErZ(Dr4_}{7bKCa(aIuku&JphqfHHvwSe)-$t{F4Pf*KTAM-ynNePz_IiCHA=Rl( zkFNM~A`8D;-WgJ|j2iEez)e5x$M6q^xF8d~A2*il3*iZeWK3inNGn*=>GxD{ox8U6 zmmfQwjNiLgwa?GnGmnOAK5F`>S6!f6_XPp^(SnyzRDSpeH#xOMojjXz1(lI$@uwi6p;$ww{h(GIasiWY zPNqh$6O~Kvd^tH$Q0JKT8e(BB{eB806#|h*7H(LOfIm86E^q;6E*~BO3n9X;L*ZtK z0EFL!S`Q@o-0y(;z84DW;nv-rT-b?fwzR8_a(2>Un=$(2z(zC+3ME1y5C|W+LJeyo zy>hZF9VDmpB<#ukT!}YJm8~`2bNBOZU&IW)(JS@!v7;4swY{exitI@gyIAUmMv+dfhbcfG*UTOs)P+I(p#t@!OC)kW`bXDpV+m32 zQe6$9zg=Zq6+<8pcMx9c%DT+}@R6RcS2o_NeM~}p`RLNInW(ciG4q{L3=Oo=aBe-4 zhYTGIVi1%aK0s>*v;G!Dwo=#E#*9J?z&vE@7DUWXOP%N5XL?HOGKFn#1;5>TO>PB6 z=Y2&>N5EH<oBbrabh`Y z3qxPPeo*Rf*7fjVt(nSzz%lTYK4RCYijmXYY1Vdz|C=^58FgO>oXI<8Y90f)FEJ;1 zuo*eGL^zva(I5q_x^62LE?U6y7-n(*xjw;K4$Q;zRFIk$&Y#Y#1od+^r|Rj;8V%R( zAMK!bqgD(btUxLF!RiQs_TYCHF{ly#yR%@@XzvLFrhHm=vXG0ahWAyo|7r8L4<2Ez ze|z{{=d%7Hs+SNo3y4_vAg@jLp+s0_Y{_c^VWW_Ex60Z2C$Kp-5+SFwF}5mTn4YdOpVi8d2WxACwK?(wTJ7cuFiuCig@(&A zgEey5VNpsJ3l760&i#KYjuu+MEUHha>Cb5GPYvig`Wn_)6$d?Fr%%7;Fo?knjuhXE z92|_iS3L4g9n3qx%6nV0z8;+X9Mfem#a_2Z=g7|8tiUaM3_89h9Nd=mR-qOdPaZvV zU54|#wa3x+G{%ohMtw0+tXBb0%6Z}wKu@K9YxnV{Tkk7@xnrLZ3`btN%croh%9}h$fRAg3r~5fEUv2F?ew`DbVpE%N4HtN`|X z@7sX+?i$ArIa94w60cVPfgw-I8luvbr0HO2z`8%1FPJ@_r1J_O@NdWYBKMgZ29G*8 zg7`r;0#-}LBc_p9t{=9DpovLw^l^_%g^umqc`VVmgF0SNL3I#*-`(pn%^z zi(q7tnQSt3*xDWcb`3V2HDc2J3z^5Qt+0Vh)Ax4k{O!>ek8cZzfQqim4V`ZjqnQdx z(U7G$5Q^v!FpB8NO^p2c?FoNVf63Sv5>6lX`~{ZOCQI)--3 zMF?UJO4^h4Fp!i>B9LI@M}JzM(bsOF*+^DaN~^NI7L!8ku06qi~X2%kd{V?eTHWTz%dFj>j}T?yx{aH-F$- z!1EKCceWN;HRa}>-su}K6gHFpzSEe^>d=ybAhaqe1GDJtfb)8{M;7W+JOM67IU?ua zLt)M#dW5c{id(*Z#ZW$)lHIgp1CiKTLjR9q%rtBs5W zfodp9m9*8I8?rixaawOBIU*p86`#rCgU{hKX~5E zfLHS{O)aaXH_{p(*qNT9?nrW0s4@z-krW+C>a^}W```%c;^ru~+~&Cz2JH`=4K;On zcWOd(h0Fit9Et`(k+84Uk8c+bhV@)!8#7tqj{3DsT<*%cYiuKP|8vmGf0Pc(ugn`1 zM-vX{V*f8|=Fr4KS}>OKauv=*xoCw%*cx#;;r>_a^PkdsvqK$>9XKFBtjQAq(?b{P z1vHU_w&I-e6^br5qrz32dtawq(GY--UwtDXe0r29F*3MMhmW1F1iG{Q~9EjEcD;1^ddH6j{7%L#klChR8DOCnXZb_w0aTTWQ>@HiwDn zXiP?u3auGPPhGwKgofVdqYaHs6`kSkBHP?m?b0!yP~g=H4_grO9=VMrfBomA;m43jr2Z+86zdY~WEfX1T?JdSS5b7@3(9@(KUv&Ewa!}^=C z@YNGDZC5VIdon8r*r%-S%XE?#V(@^K#Y&xm1eRmh3j`wSy~_nT3&qaEkycKV6N+Hs-MIds`6X-C(Is)myLbJty^QX0>P7dsg$8M5?956AuVueKNd@&q@_h!q62|?-?G{EKJ8TgR<=lmw&r=_zjry990o;ft^oeJW!XNQp~8D2yN6oL*2$1klFP$Ib8h(%=6y$c^E z9SBn+mem4qOQ6W_fJ7dc+W|!Uqze1UnhX5!>KaXmIYQROG)Lhc^JPHsW{!T|yE_A6 zez#XoYYNvxOabWejv!Qq=aqb*JC@yc=qcimvtdXUlD7<&z`5{xu03pdPWlw0Q(pS( z2H$u`hv}~{7^($k-^O?$Ww-;zxGtJGm8QVrTqp_$|0r&6L1|CjK($AN!?Ap4JMQH@8Aa9@G|DGS zJp4edx_k(Wm^5C1aS43oT;+fJhE^3H;_VxsF>s&{C0oWLQ`GO^BkV@$i~8dC&)6ff zs4b>Lq)GAG% zCM>7Si{DTetjkQUS>fL#IPk!rKK9ZN(LMOWTgTRS+&l&<2}2lu&Ljd{n5CXs$yqo5 zn^z=R;gf%{tX`0uapFcLMTOSc*Fn=1R}->PsT4QLd)4sht&fTkWD3zq%%hh)4} zR8UUkko^dEVzQ6B)SQD|9+UZIf7 zZ%2H-o#7)_Duaqe{pm=d2+@aDcwKEI@7mRmkxNQV&kr<4EvuIpZ&B+*8=b1Q+A`6{ z?Xw2DGjT72RG(eFDe)Z^JT@+BcyGTid_zHArdwk|>N2V0d_f7hdvAZxF|CzLd+`P` zK^0(6t?>*SMmW2|JEzqrAij$^5(E;)fIwnW!(Hx_qsq6@aV%EaZx^3DD)5r}_-wrq zUXg+bjRt zs}9U9vKC{UYi=(3%kOp>mLxwqi|>i1f$!Xx-^IZGV#j;m6U||I1Henb!|L9nWSK{6 zc~;i8yupR1TKTWdr8>9FCt8jbb7z|_0=ofETo*4Z-)Z|UgrzlV%04Kejtf14|32~v z%XS_L+w^xmH(Y}>z8~4(--vnf`hF?c$#EG@O928G0&}Tze)2hgJfheOYYm*>w|is( zhNj=vZ~4QXJD;`3TIh|0umt8o#8Qbgr*?9~txe5=meI2L63T#{my0IyUp}>PJYifW z5ZzK1^IvhFzs+wAKv*JBT~t-xFnPb|zIGYlcC-t3*6RJGbjn@jRn?ak?P=c&hddQS 
z)8g@Iu6R9TF?KgOiYR9J3hYhlYxCNKI+G{bstUVF>WU1N2KQimdCmwqMD4t$@imfe zj__3uI=VwEFFrX{$3`e4Wl5BLl}jPI+TqZWlWZ`kq%$_L*>1;7N0((PHcn*?FUyP? z?bMFf#j0v*)tcjX`n0X{W%b23a(vN(kl=)r_nW*Tlp6uNXgF)(=TFq0c zLvjk%ltSZ4o3d_nhuYSDwJpsfTH{u`f4kbqcKX&G8%(mSLIE3c`KKZ|#g{dn*uy#C z9)LJj2EOXJc&rC#>R)7D%Q};Mcx_h!D4(}}tKSX!P3n1pE2SwT5+%xlwV5Av{i=nX zf_~nwz83q3(TR&HxAdg9#Y+>Tlvs{~ukSqg&(UYA`!@i5U=V=K+SYm!u*OI*l^nFs zX=_=SJu=4@7UbdY`{iy8U;Ec}|5(5NM^{$TxsHyrfmvNIOFT;MRAg=zow&GJv+d^f zN=-IE;OBDPjhq|vPWxhNzVFjS9XPdoAkD%jgERm(*b+=Y{vkc#Nu?AQb$@#5Z4R2s zkY2spNmV+O5P<2JWdDuB-HZ}p4nJWsXaX;gu*7NZdBr=}*KP(;x{3JbZy?z3kdr8j z{(-f3BUf<-_~!{pVJD6ygusKR@**+z#_9 zUupR8uaaG&#iBsBkip|rei7U`8GFp^9aXe&t^7^>*;pOdkf8-?`ozgo>6@unIy&#s zKvoo!R@uIQMiy^b`(7xJK9Pg5Ifgw}#EUkT$JQsde_T;h7pswSZdX`o zBSt(hd087`3w@5%ml>7RcLn^BBO^zV(9mOrW?HmyHMOy3adL2Lc{&>mzfYG}-gIUR zvQ(uPmV|mCv`7+D_a;#4$`4*Z79Nbok%`0Y9Sy^dOFK>k@$5R(jS-`_ET71?$G^1j z#hG8oLeZ3y!I zIr!2KKxMG`e%y50jm)j5zrxdGk|6RbETSD?hO(x>^k(_Cb8uRYT*DnIqva{A%}LW! z%?zE2exenF<@3*R@AmFSnk+t(IaEI3HZ91nt3`wm?IQ@KIu4F2GPNIFgW1w-^5Tjr zzliSakOP*e2+4~lXJqpP?xT`+QJ^t(OKNuLq7nQ`U_{~f^uX0Vf+JtzdIy!v3*TE2yxCq+3 zmx2?LZ@vO7E!oLXgADFuhj0Py?`ao@9K$>RJRZX#?8>k$SNF?|r3xP5aU*ScE6enB zWo2B_tEVq_xcR+Q;G}N9c<1B3U&`F5BT65Q(LlpRp!gFOz}T3DZOMUSZxE8V`)k*N z1pVct^9@hQl-|Lh@LZ@r5e~>B@eQk=Zv)hL&FJlozmJ^-vaz?bkE?{3W4|B?9Wl#rhXOZA@F^c##c(~_f3A^44sA8$3F=Yvq)2`RJ&I76~~@H!P<-0mJstYKMk^W z-sKgB0TZBoVR*UQdEOeOoXp@X?j7Q1#^VJ=N6~R*JeikR;1#*8w0Kj3_tfuvYGkcg zlALYL&ie#>9tu!z{eYXNOosb&YI;j2*As}Sbr*4<{#7@5yMvCd+RmfXXPZ>?LQ~cW z43IOF(h6MlNq0h_;<>zwepxd2Xo4-M9|&lgk_ExSSZyl2d&6@uXGa3mru04xOC7_2 zeTxNLP5zdtLmE+qnSt>7%*McATI{_ggapmw$ba4 z)47KnvtHpDgRN8Gd6DmD&VU@!V-#;qkolx`T~Nfvh6ST*^iw;4i!0=K2GrR(yB425 zx1z7lCDO16g5L&2!UyWzO^JT`w>I_7nVv$&xDn16db~&w(;2%dxz5GWS!@?W+l%RL z3d>o2*5&Tx_q9OdM5w!~h?hpmOUgYmi z>Vw5{pBc#t(lo#3iIUn=PL(2~eA%106>GSzBJ4=nWSQ33(9U#p+#cGAG;K6Cc${!w zp!zL!oX6YK? 
zPhI&O*L7gLVKK|yzjQ0m;&LnK;Ar(MF>(?R5;318I+O4Ld6FyC$%e^z+pvXz{l~9jfQxHf$)q$Ogb2+$5*WC2&13Btc zb|lHGdOF1yW+UPX`?*(dB8OU(XM|dJ_Tb4nu{2yl-EaSin=LoZjtvhQzi(aj{?xA2 z*VWyZZK&l1(=@1>ty>FcK=r+|ygG0RWE?!6kGnY(sWxIc3{F3!r2vugB~K?sq}csb z*>s$l@E7}ykdc*@i7ikw)1dHV851~GR7?paz>g7f2uen=i2HLeyl+Me;22Ebi^j89XnvHWgModvFZwFxteCyK_{Pfc`AnRn$l{Z&4W~^yrjq~P04i4Zpid?a^vu2|4`97BKQtU=SAMAT@hYg!+U8x>1a5l(k z(q}(LUBdg{{}lW_cLmPA9Z(({PJO5ffHP+-XyQbV#q3g zT;LT1k;*N|TQC}{og&qHOz}EtP5mBAdbb~5M<8m&Gg_RNN?QpvQB7oRPq!G@8=J>B z8VMwEe~f5`3lqY{!Q7CL**EZwt*40;t%UYAGeSk~8_lQ|*+?I{(Im zM6Iwe%GQCFR)G>y@jLRz)B3 zs#dSsj8h|R7nSjZdgw`zOOz|qmmt4pks!F_i1;7XUbJ0Cz(oD zbOuVKkK|Bnk6Kha)c7r81k~>!B zER=eoTxlpY+10w!Bfp91QnDKHMfQA@lk!iHeX7{aKbI{xi%wg_XiI~7R5UWI*rr`y z^!fLsU!velyQi>BR}f)mg6~7VNUHx5Cl^>S*vrI`Z<0SPWEZ9&R|YV50^yR%glz0C zj^_?F*>#p(F`47~xliY!W(4pzl_dS-b`I^$h8ZYJC?-nae8$odxYcTT=i}WQ7mjw# zgHPv--!4z-8`0NNptNVs+m^UC1z+DSj!*7;(4E`?{$HGn|LQS+j9Ru$Q0Mt>bebJj zeHFCu_jeXCcIaMY8*LR0P}}X-l=Xj{ULfjIKh&6cNM6Gwm|=tRs{v=kVXMiX@6%dx zLr+l#>wYSMIwgGbo6<<=B7&|ga_(B{^Vooo`bkYEnk}vvDj;g377=`jAcR>i8tPZAUT~)gNk>lRbaFvK3 zWD?)4LaDVe;q?lv3x8skl7JoX=$CQQ5$dnY{d+OuLt=6)#YesFT(Z!;@3W#F*j9AdR6S@TTvC6kCu--xuKO z%(~|<I@d0!?Ze^g<`QT~8HQx3YR;=bu2MQm^$aQ*E}bi|yq7K?87K)e zIOR1`-F(r=sugj$^Ap%yeFiYZEoM{$$&hb1?k`=>>__`<5w)(jrLeMxqql7GaA1fgXZW_ zjvEU2!V#?mf)!f|A`)i0DSej9*3%r)yLVD@COY^44&(BZIhx9)@DVSl!MaX4p8KKq z`fH{%V$bXHe%>x*f>;tBe-NyB%F~m+M<(j^NpfhL1uyMtySiU9cTqyg`L1$AnkFsq z6g_0PLKn?PReWp!6$rgew@b@KNcI;?fa7)yDh+sN-vlFNb@|nwtz2Jv3>5G&e8d+0 zMCAq-v8Y+|q9y(P|LB1B`C^m}GWACf5Ja1!6V(gpsp~!%B}ww!q3$(WywZyIjim!W z92<}wiR&_v5hXwOdws{{;_Mwm=RE(ty!y3{ zO7313dtvL9vSs+|`jZOodR1h8n+I1VWOEFnPHv&PBLo z|3{e!zMSRyk!UU&*;xx-4>t=TA8X}|NUNAA>}1A@a7(gcyTggq!|Xi6)&Ako=o5S2 zUXOQo-+_dk%60*Z#ar~Lti@-T#T;J`U16m?8+_%l+iLiq_V+N3ZgWJrYDjU*$!)(2 z<)_E6eG}h?MP0}LQpqIG<`=jx|K^w2m{etqeH&7+1yp3E+52@f>Ge&c|1`!taDLo< z?Ry`q?!;wX3uJcBLmiO8CU-{@6GP)Jkq67jz-m(rI6PuXlqD)Mo#Yn{ChH^3JoTrG zN{>9^GkZ2n9r(P zVNJskC(vRmgm0vq83Mq~zJPen*TUaG+-9HenJyK%_2mtJdY=h$hfPnamJ?W$iA~csmYBI6DmDi%%vn=XSWpGJ$OI5;gcSJwdPv?1Bd?m)mrlW zJ$qNanNc{sn=d;)ub>`RBE8-p5O^f22~?p-NblrO5jkR>OJA>yzx33)aJQXOhx}y% zAT(BNCoiCnwv#i}>79@jCv4(F$c?~cRDW&gndWeF8Ks&EB9o7GLV`kfQjS*W)b-~v zA{NyEK`xZS&V+yB)1>beuI_yWiYqJKXzKy?}t9UZbjUEgSe|1tF`&$~7NYRvxz?25tbyRbAe27dHI>nK= zhFZv@J7UY@v$A8IIK8!;uFzE#&-hkIK)?Oi_omncEP)ih?^`@WT&zmKMw?T?<#o4U z0E8)}taVbxW+J)BL2Gbl_xbFzAvr)iZ3VB&Fx9X_9~Bil+GY$LJS= zu(5Qq>zQjyj)t^d=5&>>cV)U2e>0aOktkZ67U0 zzaM+qMdXXE-m{SRi^~!+B(O4a@kAOIV1Yw%G8S3NUieQ{ z@`=%UqY^ok@;kyO+gKB^0@B;C*l44)wZBY-*1Qa;46fTrGvSyB$(NFN(RSU!j=aC& zs@kBXkRq>@lPtu5@(S57qR9%?Y;QP_pGFKTOPJJ*b$G#`g0o5Lpng(K7L6wc3jJYE zWA0}1YjK`yIlTiswHaa`F{!pLv7c&OHR$c#KB35I#*r8{HOF<>-pm@HUn(9)gb)Xs z#151Dy*9Tqou2zX*1y)bliHDNv75X?7#8Q}CX<=cF^MlxPJYRL z-p&K{r<)xG@b8_zZd9^98(9sDS-EqmV61Mjgy?!Lw?{N4=>gDN{UaJDAK70tZ2{p5 zlnkJmk6~^j0Q_QM{ws;j60EQ7!~I=!pN;eDmxlL9lSupqM)~O5%<^qqBZ}TU5>iqk z^EYF-dmkjr4syM-(x8IJ>>X(~z%px4wL7VW#aO*`n;mmvcfSd%z?`X+%B-wS231>v z(KrLy%EF1C)|2f*5E z35$#~9)VjnVylbnQv7s3OXUi`B}S%VL!(I9^)G_4>bz0 z;Zt4&XL26;b3-Cs&%rH#+VWH+|IFIZt6OJVs}Xt1WQ|SF3I)v=1O12#J3fXC^gMC0 zmpv6?TBJm5Yhi(*-f+Zo2%wfnq>>3@0h^QXZa=F2ow?#!WWk+S@+?L|NjKAE8<$^| zLkfCH^7vpF7x&a36OtmKKNt5TLcQHU-^bSKx7K|$sy1u`od2T$QkJv0L!HFkrb>?h=_O48fmctYHQl!rtQL>13-$W5(BbyiJ}MoRrs*1IF91XV7YsfBa{aVl2s zx57pJzH2CNk3p4**K0Gw{VaQP^R_d?eA^{SWqYY-VH)tjNX6$lns%fag+BmciwTD; z{eVqUm4Mgr3)34~grHgkOhHM1NIlmK)DJ;NPEBY=^bL5fof%EdN2GAc*tSba|5 zd%Da_mCezJ-OR#}B5eCDOYKr|h*?#syewp!p-?V6K2h15S)NpCOho4^p0%JDK5iEh zx5E`Egfd;y$Z2-YWKQw6dL`Uh+8l`BJ0L5q7U=v+RZic}Zm1hu}UNe`mO 
z=LptzGSdq5EKUf?`+YG^;{mRZ>MEv&WAW2kl}mE-NCVt17>JK7Wgxm{we_u2<8t}k zhE3`2yO=e>c54;}iy6mEDa~O){1F{NO2EspIQ_)1BZPC>#dQK?im_j?!XC+>TvujUx`O zrP>n6kf(ZfC;SY5DVK1NYw{0LRH(j&?q7GP^!vy~O?pd-yJBaRdj5PM2kMk9%57Lq z8{48QQJxx3-?aAE)fi{#%_G-5f|VtP;dT|evh}ysUl}sn2)6>_4#d`5)A05UZPLX1 z02wc&ab>YE*| z00wzTjq#4xcwee33dNraE!<1rf#}rrLC>Ne*Hz+OPOl;ShcE&{W3yKE(nV^p6KB=` zRMYM@Oo1fB_Fum@?w?s^yJuO8^%W-k>^AFHd7i`>XSn}I49ca z=gHReK08-Pi5@6RFtZAuUM|6SAmr9D@_T~cKyi9ccIdqOV(_+7_q`0!Q~}bIJ)p&& zW{@X%7USX^sK)VIDH$%xZw&JAFK)XGZ*H5^hV7)=SIL`3%j>^td5j9#)xL!K>sfi& z?cYH2ZOjQlvHR&piRSs_6lh@}Fy1D3bWyLXRg>DSOkm@f2&XQ#-T~XVg*Xa+Hzzm> z(gA&X*`GJTi-N~5ukS-Mho#wx7!m1QlKQ3LjFDcuw^Q0VZ0*zsb4BrpU(-i{iRjxZ z4wO`zbg%Kr_q%?k8tX1bhjnJ%E;{f`!2~Od6BuwtlWYrt-E_9gK&;Y|FbP3`P{}?M z?*aFreO^3N5_5SLsoPEJFHiDa>%XbLV$8Z*TJ?HoymC7LVZcg7WTsE-x}QtvjkteE z)emmI$xS`a4?+LBe*!!~@gDlt&DDD1dMDe?TRB)09>_d7wn* z>B%%mKS|5ch9vpQtJwXuLJjOM2Z}vQpox06_V}qN{w1Hf;cu>$RMe=8G?PF*FVnZ< zlGv3(nC%)xH(B;wJMqlj{ebX1v|JYhFlX+7n zbOM7NWBYsG`uS@hqD#v^z^BId-Y#pPr(%W@#^g(|t?qMl-|B&F%?8!`c&j(aaz0d{ zGRmQ$2!<3KgmgVe;%z+tR>_L5{q2jsae_f=KcLhRe{PNxD2qyj1QLQAg#pu3`yOas zD@2DAgAQrzZLUC)(Avl_%KNLYno*aAk#w*|2=AMjyPsokxx--ms^V$9V1_pjI3=1Y z#8SZ|$E_JsT`3M5xPrvD%0an8oi56j=9s90h3n8&sNajoTxSRe2822S-r=;hF%2DM ze8e+Kre}(!T_RZ$(U4rL|I%ZzEV~EFNNeM@N8t6~7*%c>!R!d8lVXBl zVJWn=l4EWf;4AzSakR{LSO?S*SHc4=Xh6ACdK~c8lySDg_f`pkFa*>HU#k^?Mk*9{ za)hMXOej0CYjHfP@rr~g=bzpZWd>K)z(RWS24$;J{WoGXRRr;k!7#8hjdn`O-U8}5 zo6@7Qu$vlPAwxkd&&~X!a5-rWMK9dA?DB9=jmEx5D3{D5oiT{fXLI@`D=Ux#grhuG zD^+!nEA~NcC)v7i@}e#|#_(t9O%4YG-k=tCW>)%JiM~ScnO!i>TNad-?#I#}>v((J!f2=gHwtwVc_EHLQC){JFeq7&ps>W$Ag5{AA z5%-n%)m`Uk9s6B0JIB6kaJrH3z;!O?qLioid$n=1i4lrqDOhOBjy_{)&~}-)5yfq~ zDifYQW_zyMSN{T4L=Pc#ME$CI0va)*OlfjUkgHml<^y$ie%U+w2tv?6msX5G3P$2| z#}ZAU`GSWiS?V@OD{M@e!KF@7;%AG)l_V?oK94RRx+$P-W{4>of3`BKkt$%=Cw)rH zdIYbw;3}9c=gIK<(6$4kYGoOTejN0P^d6Erc!4g3XYGDqwO^ERSQsi+-!=}GN!)X>w*ji{P1H>wZ{UH6 zX{an&UKRFSLBQ>AVwy2F&Q`XK_T!efPgBi&dArxpzkCbg)}*sMQ3d!ynYcWix z_|npYGkjM4H_VCfl1lDfoX0C$VNvA=MKO()qiafz$U5Uzd^r!`sw6gjbZ`=$i^_!5*E*mpvGd zg5%DuZ3wIxm4a&5e0xsqmgD* zYGLt_w3+$h0%!yaVq;0um3t$XEA$yK5Pw|pv!C9zSh@wc?lNT5)5EG6KfIzyluy3k zUv3{ba}*4FG$(pmR^nCj0s#eCNQ4~D zqf!&>E;YJNTW#siz8Z?A8ZLGxgC714l~`@O#>4Wd5=#=oawdMM<77yT(2db7k@4Wp zE%_OM$dm`us47x}?QgqM7)?HZM=$E)8)}u-P|8J5me;Vs-QgJLa01hjt`-GZf4WXYs8)21~d#k7r)eGs%T zoTM@mjdY}?b}Wv#jHbE*Kz`zf{tRkAt>Qc*%XqotdNs+gjp4Eba2n*ly|eRwCt$ys zh~nX>+L&#zD&EyQzPT7a-T4FSO1;b<&IKtjfrbAlppEY|+K)W=f(08x4LSchxPcZ; z&=#FTV)*|ywEy4&Mhf@OGx`^f5+SBVpmLE zI=62U*W>|>NHHU*R5SE{tCw-<<`9FC;fkJ1!6_8;hau))x%lmF$sfp7&pD(kD96H)c$SxIVbZT_~A3 zq=}nfv}2Lwr=d1$v7i?b+##9FLkXQFg^h;+o~eoUixID_yyG_rQYZ@APz*{54#pA0 zKa>pR#RSC`{ME;>CYUt;d;KKSEM)0R4s_P8I^L$4pB(rX9NTKK(#8fN{R*CJBK6fj zg$x42U%7H@19J?CBoA$x)b)Wp621#55p_mM7E4!7(moooafA6ECF-Zt^1qol{;FtA zId&y37DAx8Lw|yrU@Kx3nm!Z4dtT`gHi}vb$}j&kSBP&eGZ2SUb=dNsnEsur&WEKT z)j_QnLZ)5KOXZBcM8xs9Gw{W^CwZ=9$>@IzmDQpcEd(2W&^0pw4EE)QCw7R^@bLL; z`;jKBD-xYQQ2yd6a!O3cQ1R6Y?8$v6opn%hlyAYLdyZByBqP$wt`$?@3G?GqjI-WI zFr(&N%W-LTiVx^1Ho9CEPW9Z5AOL?Gi|-iXg08;`9bHFOX<@)jh53F(ufGo7X8;-H z0l)YvMmC@|H(*Hq)5~Lc+wpVu7B-~+C=Jcxyn+Svys26)m~PyI-+W15v=_={`XO5l zHTRU5<6Q%(;GtU{_)M$_Z@txr^r;MoqLKj!*lxsJ-o*}P>e`FX{w*=TWA)e>mkquq zR>aObeoL>tvlW0b{B)@!*Q#MRNDVE1iwYTY0jEF7nOpwz-CzpVB)}t%DHnxnklM&j z{5nE-m_I0{MuyF@X{w^ZXId;$ZzxX3PofMm&=br2L2ZV2EG&HUL-^jmzMYczD$O`Z z?tN3awcrjqUCwXxK5<+SI?>|?PR!D$t||ghxxLKVr-Z6Dw@24}CgX^Pq}kM_7!5qg z%Z*9SS}A#;Gxrf6Yzc??{fJaAfRlxa)hoqd(HC= z7O1`LmWceuZ0Io0(jzpSr>;rS>W?x`vcp>fVVJl1r4thU;2&FV>(dCwX&XK8S-%w< z9R&H4wYnRLSj%_btvh@R$#$Oo0`rfNf}|CtyFYe$!fDRQ{TCn#B2oP}ys`rt2n8pY 
[GIT binary patch omitted: base85-encoded binary payload, not human-readable]

diff --git a/_site/site/public/font-awesome-4.7.0/less/animated.less b/_site/site/public/font-awesome-4.7.0/less/animated.less
new file mode 100755
index 00000000..66ad52a5
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/animated.less
@@ -0,0 +1,34 @@
+// Animated Icons
+// --------------------------
+
+.@{fa-css-prefix}-spin {
+  -webkit-animation: fa-spin 2s infinite linear;
+  animation: fa-spin 2s infinite linear;
+}
+
+.@{fa-css-prefix}-pulse {
+  -webkit-animation: fa-spin 1s infinite steps(8);
+  animation: fa-spin 1s infinite steps(8);
+}
+
+@-webkit-keyframes fa-spin {
+  0% {
+    -webkit-transform: rotate(0deg);
+    transform: rotate(0deg);
+  }
+  100% {
+    -webkit-transform: rotate(359deg);
+    transform: rotate(359deg);
+  }
+}
+
+@keyframes fa-spin {
+  0% {
+    -webkit-transform: rotate(0deg);
+    transform: rotate(0deg);
+  }
+  100% {
+    -webkit-transform: rotate(359deg);
+    transform: rotate(359deg);
+  }
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/bordered-pulled.less b/_site/site/public/font-awesome-4.7.0/less/bordered-pulled.less
new file mode 100755
index 00000000..f1c8ad75
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/bordered-pulled.less
@@ -0,0 +1,25 @@
+// Bordered & Pulled
+// -------------------------
+
+.@{fa-css-prefix}-border {
+  padding: .2em .25em .15em;
+  border: solid .08em @fa-border-color;
+  border-radius: .1em;
+}
+
+.@{fa-css-prefix}-pull-left { float: left; }
+.@{fa-css-prefix}-pull-right { float: right; }
+
+.@{fa-css-prefix} {
+  &.@{fa-css-prefix}-pull-left { margin-right: .3em; }
+  &.@{fa-css-prefix}-pull-right { margin-left: .3em; }
+}
+
+/* Deprecated as of 4.4.0 */
+.pull-right { float: right; }
+.pull-left { float: left; }
+
+.@{fa-css-prefix} {
+  &.pull-left { margin-right: .3em; }
+  &.pull-right { margin-left: .3em; }
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/core.less b/_site/site/public/font-awesome-4.7.0/less/core.less
new file mode 100755
index 00000000..c577ac84
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/core.less
@@ -0,0 +1,12 @@
+// Base Class Definition
+// -------------------------
+
+.@{fa-css-prefix} {
+  display: inline-block;
+  font: normal normal normal @fa-font-size-base/@fa-line-height-base FontAwesome; // shortening font declaration
+  font-size: inherit; // can't have font-size inherit on line above, so need to override
+  text-rendering: auto; // optimizelegibility throws things off #1094
+  -webkit-font-smoothing: antialiased;
+  -moz-osx-font-smoothing: grayscale;
+
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/fixed-width.less b/_site/site/public/font-awesome-4.7.0/less/fixed-width.less
new file mode 100755
index 00000000..110289f2
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/fixed-width.less
@@ -0,0 +1,6 @@
+// Fixed Width Icons
+// -------------------------
+.@{fa-css-prefix}-fw {
+  width: (18em / 14);
+  text-align: center;
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/font-awesome.less b/_site/site/public/font-awesome-4.7.0/less/font-awesome.less
new file mode 100755
index 00000000..c3677def
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/font-awesome.less
@@ -0,0 +1,18 @@
+/*!
+ * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome
+ * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
+ */
+
+@import "variables.less";
+@import "mixins.less";
+@import "path.less";
+@import "core.less";
+@import "larger.less";
+@import "fixed-width.less";
+@import "list.less";
+@import "bordered-pulled.less";
+@import "animated.less";
+@import "rotated-flipped.less";
+@import "stacked.less";
+@import "icons.less";
+@import "screen-reader.less";
diff --git a/_site/site/public/font-awesome-4.7.0/less/icons.less b/_site/site/public/font-awesome-4.7.0/less/icons.less
new file mode 100755
index 00000000..159d6004
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/icons.less
@@ -0,0 +1,789 @@
+/* Font Awesome uses the Unicode Private Use Area (PUA) to ensure screen
+   readers do not read off random characters that represent icons */
+
+.@{fa-css-prefix}-glass:before { content: @fa-var-glass; }
+.@{fa-css-prefix}-music:before { content: @fa-var-music; }
+.@{fa-css-prefix}-search:before { content: @fa-var-search; }
+.@{fa-css-prefix}-envelope-o:before { content: @fa-var-envelope-o; }
+.@{fa-css-prefix}-heart:before { content: @fa-var-heart; }
+.@{fa-css-prefix}-star:before { content: @fa-var-star; }
+.@{fa-css-prefix}-star-o:before { content: @fa-var-star-o; }
+.@{fa-css-prefix}-user:before { content: @fa-var-user; }
+.@{fa-css-prefix}-film:before { content: @fa-var-film; }
+.@{fa-css-prefix}-th-large:before { content: @fa-var-th-large; }
+.@{fa-css-prefix}-th:before { content: @fa-var-th; }
+.@{fa-css-prefix}-th-list:before { content: @fa-var-th-list; }
+.@{fa-css-prefix}-check:before { content: @fa-var-check; }
+.@{fa-css-prefix}-remove:before,
+.@{fa-css-prefix}-close:before,
+.@{fa-css-prefix}-times:before { content: @fa-var-times; }
+.@{fa-css-prefix}-search-plus:before { content: @fa-var-search-plus; }
+.@{fa-css-prefix}-search-minus:before { content: @fa-var-search-minus; }
+.@{fa-css-prefix}-power-off:before { content: @fa-var-power-off; }
+.@{fa-css-prefix}-signal:before { content: @fa-var-signal; }
+.@{fa-css-prefix}-gear:before,
+.@{fa-css-prefix}-cog:before { content: @fa-var-cog; }
+.@{fa-css-prefix}-trash-o:before { content: @fa-var-trash-o; }
+.@{fa-css-prefix}-home:before { content: @fa-var-home; }
+.@{fa-css-prefix}-file-o:before { content: @fa-var-file-o; }
+.@{fa-css-prefix}-clock-o:before { content: @fa-var-clock-o; }
+.@{fa-css-prefix}-road:before { content: @fa-var-road; }
+.@{fa-css-prefix}-download:before { content: @fa-var-download; }
+.@{fa-css-prefix}-arrow-circle-o-down:before { content: @fa-var-arrow-circle-o-down; }
+.@{fa-css-prefix}-arrow-circle-o-up:before { content: @fa-var-arrow-circle-o-up; }
+.@{fa-css-prefix}-inbox:before { content: @fa-var-inbox; }
+.@{fa-css-prefix}-play-circle-o:before { content: @fa-var-play-circle-o; }
+.@{fa-css-prefix}-rotate-right:before,
+.@{fa-css-prefix}-repeat:before { content: @fa-var-repeat; }
+.@{fa-css-prefix}-refresh:before { content: @fa-var-refresh; }
+.@{fa-css-prefix}-list-alt:before { content: @fa-var-list-alt; }
+.@{fa-css-prefix}-lock:before { content: @fa-var-lock; }
+.@{fa-css-prefix}-flag:before { content: @fa-var-flag; }
+.@{fa-css-prefix}-headphones:before { content: @fa-var-headphones; }
+.@{fa-css-prefix}-volume-off:before { content: @fa-var-volume-off; }
+.@{fa-css-prefix}-volume-down:before { content: @fa-var-volume-down; }
+.@{fa-css-prefix}-volume-up:before { content: @fa-var-volume-up; }
+.@{fa-css-prefix}-qrcode:before { content: @fa-var-qrcode; }
+.@{fa-css-prefix}-barcode:before { content: @fa-var-barcode; }
+.@{fa-css-prefix}-tag:before { content: @fa-var-tag; }
+.@{fa-css-prefix}-tags:before { content: @fa-var-tags; }
+.@{fa-css-prefix}-book:before { content: @fa-var-book; }
+.@{fa-css-prefix}-bookmark:before { content: @fa-var-bookmark; }
+.@{fa-css-prefix}-print:before { content: @fa-var-print; }
+.@{fa-css-prefix}-camera:before { content: @fa-var-camera; }
+.@{fa-css-prefix}-font:before { content: @fa-var-font; }
+.@{fa-css-prefix}-bold:before { content: @fa-var-bold; }
+.@{fa-css-prefix}-italic:before { content: @fa-var-italic; }
+.@{fa-css-prefix}-text-height:before { content: @fa-var-text-height; }
+.@{fa-css-prefix}-text-width:before { content: @fa-var-text-width; }
+.@{fa-css-prefix}-align-left:before { content: @fa-var-align-left; }
+.@{fa-css-prefix}-align-center:before { content: @fa-var-align-center; }
+.@{fa-css-prefix}-align-right:before { content: @fa-var-align-right; }
+.@{fa-css-prefix}-align-justify:before { content: @fa-var-align-justify; }
+.@{fa-css-prefix}-list:before { content: @fa-var-list; }
+.@{fa-css-prefix}-dedent:before,
+.@{fa-css-prefix}-outdent:before { content: @fa-var-outdent; }
+.@{fa-css-prefix}-indent:before { content: @fa-var-indent; }
+.@{fa-css-prefix}-video-camera:before { content: @fa-var-video-camera; }
+.@{fa-css-prefix}-photo:before,
+.@{fa-css-prefix}-image:before,
+.@{fa-css-prefix}-picture-o:before { content: @fa-var-picture-o; }
+.@{fa-css-prefix}-pencil:before { content: @fa-var-pencil; }
+.@{fa-css-prefix}-map-marker:before { content: @fa-var-map-marker; }
+.@{fa-css-prefix}-adjust:before { content: @fa-var-adjust; }
+.@{fa-css-prefix}-tint:before { content: @fa-var-tint; }
+.@{fa-css-prefix}-edit:before,
+.@{fa-css-prefix}-pencil-square-o:before { content: @fa-var-pencil-square-o; }
+.@{fa-css-prefix}-share-square-o:before { content: @fa-var-share-square-o; }
+.@{fa-css-prefix}-check-square-o:before { content: @fa-var-check-square-o; }
+.@{fa-css-prefix}-arrows:before { content: @fa-var-arrows; }
+.@{fa-css-prefix}-step-backward:before { content: @fa-var-step-backward; }
+.@{fa-css-prefix}-fast-backward:before { content: @fa-var-fast-backward; }
+.@{fa-css-prefix}-backward:before { content: @fa-var-backward; }
+.@{fa-css-prefix}-play:before { content: @fa-var-play; }
+.@{fa-css-prefix}-pause:before { content: @fa-var-pause; }
+.@{fa-css-prefix}-stop:before { content: @fa-var-stop; }
+.@{fa-css-prefix}-forward:before { content: @fa-var-forward; }
+.@{fa-css-prefix}-fast-forward:before { content: @fa-var-fast-forward; }
+.@{fa-css-prefix}-step-forward:before { content: @fa-var-step-forward; }
+.@{fa-css-prefix}-eject:before { content: @fa-var-eject; }
+.@{fa-css-prefix}-chevron-left:before { content: @fa-var-chevron-left; }
+.@{fa-css-prefix}-chevron-right:before { content: @fa-var-chevron-right; }
+.@{fa-css-prefix}-plus-circle:before { content: @fa-var-plus-circle; }
+.@{fa-css-prefix}-minus-circle:before { content: @fa-var-minus-circle; }
+.@{fa-css-prefix}-times-circle:before { content: @fa-var-times-circle; }
+.@{fa-css-prefix}-check-circle:before { content: @fa-var-check-circle; }
+.@{fa-css-prefix}-question-circle:before { content: @fa-var-question-circle; }
+.@{fa-css-prefix}-info-circle:before { content: @fa-var-info-circle; }
+.@{fa-css-prefix}-crosshairs:before { content: @fa-var-crosshairs; }
+.@{fa-css-prefix}-times-circle-o:before { content: @fa-var-times-circle-o; }
+.@{fa-css-prefix}-check-circle-o:before { content: @fa-var-check-circle-o; }
+.@{fa-css-prefix}-ban:before { content: @fa-var-ban; }
+.@{fa-css-prefix}-arrow-left:before { content: @fa-var-arrow-left; }
+.@{fa-css-prefix}-arrow-right:before { content: @fa-var-arrow-right; }
+.@{fa-css-prefix}-arrow-up:before { content: @fa-var-arrow-up; }
+.@{fa-css-prefix}-arrow-down:before { content: @fa-var-arrow-down; }
+.@{fa-css-prefix}-mail-forward:before,
+.@{fa-css-prefix}-share:before { content: @fa-var-share; }
+.@{fa-css-prefix}-expand:before { content: @fa-var-expand; }
+.@{fa-css-prefix}-compress:before { content: @fa-var-compress; }
+.@{fa-css-prefix}-plus:before { content: @fa-var-plus; }
+.@{fa-css-prefix}-minus:before { content: @fa-var-minus; }
+.@{fa-css-prefix}-asterisk:before { content: @fa-var-asterisk; }
+.@{fa-css-prefix}-exclamation-circle:before { content: @fa-var-exclamation-circle; }
+.@{fa-css-prefix}-gift:before { content: @fa-var-gift; }
+.@{fa-css-prefix}-leaf:before { content: @fa-var-leaf; }
+.@{fa-css-prefix}-fire:before { content: @fa-var-fire; }
+.@{fa-css-prefix}-eye:before { content: @fa-var-eye; }
+.@{fa-css-prefix}-eye-slash:before { content: @fa-var-eye-slash; }
+.@{fa-css-prefix}-warning:before,
+.@{fa-css-prefix}-exclamation-triangle:before { content: @fa-var-exclamation-triangle; }
+.@{fa-css-prefix}-plane:before { content: @fa-var-plane; }
+.@{fa-css-prefix}-calendar:before { content: @fa-var-calendar; }
+.@{fa-css-prefix}-random:before { content: @fa-var-random; }
+.@{fa-css-prefix}-comment:before { content: @fa-var-comment; }
+.@{fa-css-prefix}-magnet:before { content: @fa-var-magnet; }
+.@{fa-css-prefix}-chevron-up:before { content: @fa-var-chevron-up; }
+.@{fa-css-prefix}-chevron-down:before { content: @fa-var-chevron-down; }
+.@{fa-css-prefix}-retweet:before { content: @fa-var-retweet; }
+.@{fa-css-prefix}-shopping-cart:before { content: @fa-var-shopping-cart; }
+.@{fa-css-prefix}-folder:before { content: @fa-var-folder; }
+.@{fa-css-prefix}-folder-open:before { content: @fa-var-folder-open; }
+.@{fa-css-prefix}-arrows-v:before { content: @fa-var-arrows-v; }
+.@{fa-css-prefix}-arrows-h:before { content: @fa-var-arrows-h; }
+.@{fa-css-prefix}-bar-chart-o:before,
+.@{fa-css-prefix}-bar-chart:before { content: @fa-var-bar-chart; }
+.@{fa-css-prefix}-twitter-square:before { content: @fa-var-twitter-square; }
+.@{fa-css-prefix}-facebook-square:before { content: @fa-var-facebook-square; }
+.@{fa-css-prefix}-camera-retro:before { content: @fa-var-camera-retro; }
+.@{fa-css-prefix}-key:before { content: @fa-var-key; }
+.@{fa-css-prefix}-gears:before,
+.@{fa-css-prefix}-cogs:before { content: @fa-var-cogs; }
+.@{fa-css-prefix}-comments:before { content: @fa-var-comments; }
+.@{fa-css-prefix}-thumbs-o-up:before { content: @fa-var-thumbs-o-up; }
+.@{fa-css-prefix}-thumbs-o-down:before { content: @fa-var-thumbs-o-down; }
+.@{fa-css-prefix}-star-half:before { content: @fa-var-star-half; }
+.@{fa-css-prefix}-heart-o:before { content: @fa-var-heart-o; }
+.@{fa-css-prefix}-sign-out:before { content: @fa-var-sign-out; }
+.@{fa-css-prefix}-linkedin-square:before { content: @fa-var-linkedin-square; }
+.@{fa-css-prefix}-thumb-tack:before { content: @fa-var-thumb-tack; }
+.@{fa-css-prefix}-external-link:before { content: @fa-var-external-link; }
+.@{fa-css-prefix}-sign-in:before { content: @fa-var-sign-in; }
+.@{fa-css-prefix}-trophy:before { content: @fa-var-trophy; }
+.@{fa-css-prefix}-github-square:before { content: @fa-var-github-square; }
+.@{fa-css-prefix}-upload:before { content: @fa-var-upload; }
+.@{fa-css-prefix}-lemon-o:before { content: @fa-var-lemon-o; }
+.@{fa-css-prefix}-phone:before { content: @fa-var-phone; }
+.@{fa-css-prefix}-square-o:before { content: @fa-var-square-o; }
+.@{fa-css-prefix}-bookmark-o:before { content: @fa-var-bookmark-o; }
+.@{fa-css-prefix}-phone-square:before { content: @fa-var-phone-square; }
+.@{fa-css-prefix}-twitter:before { content: @fa-var-twitter; }
+.@{fa-css-prefix}-facebook-f:before,
+.@{fa-css-prefix}-facebook:before { content: @fa-var-facebook; }
+.@{fa-css-prefix}-github:before { content: @fa-var-github; }
+.@{fa-css-prefix}-unlock:before { content: @fa-var-unlock; }
+.@{fa-css-prefix}-credit-card:before { content: @fa-var-credit-card; }
+.@{fa-css-prefix}-feed:before,
+.@{fa-css-prefix}-rss:before { content: @fa-var-rss; }
+.@{fa-css-prefix}-hdd-o:before { content: @fa-var-hdd-o; }
+.@{fa-css-prefix}-bullhorn:before { content: @fa-var-bullhorn; }
+.@{fa-css-prefix}-bell:before { content: @fa-var-bell; }
+.@{fa-css-prefix}-certificate:before { content: @fa-var-certificate; }
+.@{fa-css-prefix}-hand-o-right:before { content: @fa-var-hand-o-right; }
+.@{fa-css-prefix}-hand-o-left:before { content: @fa-var-hand-o-left; }
+.@{fa-css-prefix}-hand-o-up:before { content: @fa-var-hand-o-up; }
+.@{fa-css-prefix}-hand-o-down:before { content: @fa-var-hand-o-down; }
+.@{fa-css-prefix}-arrow-circle-left:before { content: @fa-var-arrow-circle-left; }
+.@{fa-css-prefix}-arrow-circle-right:before { content: @fa-var-arrow-circle-right; }
+.@{fa-css-prefix}-arrow-circle-up:before { content: @fa-var-arrow-circle-up; }
+.@{fa-css-prefix}-arrow-circle-down:before { content: @fa-var-arrow-circle-down; }
+.@{fa-css-prefix}-globe:before { content: @fa-var-globe; }
+.@{fa-css-prefix}-wrench:before { content: @fa-var-wrench; }
+.@{fa-css-prefix}-tasks:before { content: @fa-var-tasks; }
+.@{fa-css-prefix}-filter:before { content: @fa-var-filter; }
+.@{fa-css-prefix}-briefcase:before { content: @fa-var-briefcase; }
+.@{fa-css-prefix}-arrows-alt:before { content: @fa-var-arrows-alt; }
+.@{fa-css-prefix}-group:before,
+.@{fa-css-prefix}-users:before { content: @fa-var-users; }
+.@{fa-css-prefix}-chain:before,
+.@{fa-css-prefix}-link:before { content: @fa-var-link; }
+.@{fa-css-prefix}-cloud:before { content: @fa-var-cloud; }
+.@{fa-css-prefix}-flask:before { content: @fa-var-flask; }
+.@{fa-css-prefix}-cut:before,
+.@{fa-css-prefix}-scissors:before { content: @fa-var-scissors; }
+.@{fa-css-prefix}-copy:before,
+.@{fa-css-prefix}-files-o:before { content: @fa-var-files-o; }
+.@{fa-css-prefix}-paperclip:before { content: @fa-var-paperclip; }
+.@{fa-css-prefix}-save:before,
+.@{fa-css-prefix}-floppy-o:before { content: @fa-var-floppy-o; }
+.@{fa-css-prefix}-square:before { content: @fa-var-square; }
+.@{fa-css-prefix}-navicon:before,
+.@{fa-css-prefix}-reorder:before,
+.@{fa-css-prefix}-bars:before { content: @fa-var-bars; }
+.@{fa-css-prefix}-list-ul:before { content: @fa-var-list-ul; }
+.@{fa-css-prefix}-list-ol:before { content: @fa-var-list-ol; }
+.@{fa-css-prefix}-strikethrough:before { content: @fa-var-strikethrough; } +.@{fa-css-prefix}-underline:before { content: @fa-var-underline; } +.@{fa-css-prefix}-table:before { content: @fa-var-table; } +.@{fa-css-prefix}-magic:before { content: @fa-var-magic; } +.@{fa-css-prefix}-truck:before { content: @fa-var-truck; } +.@{fa-css-prefix}-pinterest:before { content: @fa-var-pinterest; } +.@{fa-css-prefix}-pinterest-square:before { content: @fa-var-pinterest-square; } +.@{fa-css-prefix}-google-plus-square:before { content: @fa-var-google-plus-square; } +.@{fa-css-prefix}-google-plus:before { content: @fa-var-google-plus; } +.@{fa-css-prefix}-money:before { content: @fa-var-money; } +.@{fa-css-prefix}-caret-down:before { content: @fa-var-caret-down; } +.@{fa-css-prefix}-caret-up:before { content: @fa-var-caret-up; } +.@{fa-css-prefix}-caret-left:before { content: @fa-var-caret-left; } +.@{fa-css-prefix}-caret-right:before { content: @fa-var-caret-right; } +.@{fa-css-prefix}-columns:before { content: @fa-var-columns; } +.@{fa-css-prefix}-unsorted:before, +.@{fa-css-prefix}-sort:before { content: @fa-var-sort; } +.@{fa-css-prefix}-sort-down:before, +.@{fa-css-prefix}-sort-desc:before { content: @fa-var-sort-desc; } +.@{fa-css-prefix}-sort-up:before, +.@{fa-css-prefix}-sort-asc:before { content: @fa-var-sort-asc; } +.@{fa-css-prefix}-envelope:before { content: @fa-var-envelope; } +.@{fa-css-prefix}-linkedin:before { content: @fa-var-linkedin; } +.@{fa-css-prefix}-rotate-left:before, +.@{fa-css-prefix}-undo:before { content: @fa-var-undo; } +.@{fa-css-prefix}-legal:before, +.@{fa-css-prefix}-gavel:before { content: @fa-var-gavel; } +.@{fa-css-prefix}-dashboard:before, +.@{fa-css-prefix}-tachometer:before { content: @fa-var-tachometer; } +.@{fa-css-prefix}-comment-o:before { content: @fa-var-comment-o; } +.@{fa-css-prefix}-comments-o:before { content: @fa-var-comments-o; } +.@{fa-css-prefix}-flash:before, +.@{fa-css-prefix}-bolt:before { content: @fa-var-bolt; } +.@{fa-css-prefix}-sitemap:before { content: @fa-var-sitemap; } +.@{fa-css-prefix}-umbrella:before { content: @fa-var-umbrella; } +.@{fa-css-prefix}-paste:before, +.@{fa-css-prefix}-clipboard:before { content: @fa-var-clipboard; } +.@{fa-css-prefix}-lightbulb-o:before { content: @fa-var-lightbulb-o; } +.@{fa-css-prefix}-exchange:before { content: @fa-var-exchange; } +.@{fa-css-prefix}-cloud-download:before { content: @fa-var-cloud-download; } +.@{fa-css-prefix}-cloud-upload:before { content: @fa-var-cloud-upload; } +.@{fa-css-prefix}-user-md:before { content: @fa-var-user-md; } +.@{fa-css-prefix}-stethoscope:before { content: @fa-var-stethoscope; } +.@{fa-css-prefix}-suitcase:before { content: @fa-var-suitcase; } +.@{fa-css-prefix}-bell-o:before { content: @fa-var-bell-o; } +.@{fa-css-prefix}-coffee:before { content: @fa-var-coffee; } +.@{fa-css-prefix}-cutlery:before { content: @fa-var-cutlery; } +.@{fa-css-prefix}-file-text-o:before { content: @fa-var-file-text-o; } +.@{fa-css-prefix}-building-o:before { content: @fa-var-building-o; } +.@{fa-css-prefix}-hospital-o:before { content: @fa-var-hospital-o; } +.@{fa-css-prefix}-ambulance:before { content: @fa-var-ambulance; } +.@{fa-css-prefix}-medkit:before { content: @fa-var-medkit; } +.@{fa-css-prefix}-fighter-jet:before { content: @fa-var-fighter-jet; } +.@{fa-css-prefix}-beer:before { content: @fa-var-beer; } +.@{fa-css-prefix}-h-square:before { content: @fa-var-h-square; } +.@{fa-css-prefix}-plus-square:before { content: @fa-var-plus-square; } 
+.@{fa-css-prefix}-angle-double-left:before { content: @fa-var-angle-double-left; } +.@{fa-css-prefix}-angle-double-right:before { content: @fa-var-angle-double-right; } +.@{fa-css-prefix}-angle-double-up:before { content: @fa-var-angle-double-up; } +.@{fa-css-prefix}-angle-double-down:before { content: @fa-var-angle-double-down; } +.@{fa-css-prefix}-angle-left:before { content: @fa-var-angle-left; } +.@{fa-css-prefix}-angle-right:before { content: @fa-var-angle-right; } +.@{fa-css-prefix}-angle-up:before { content: @fa-var-angle-up; } +.@{fa-css-prefix}-angle-down:before { content: @fa-var-angle-down; } +.@{fa-css-prefix}-desktop:before { content: @fa-var-desktop; } +.@{fa-css-prefix}-laptop:before { content: @fa-var-laptop; } +.@{fa-css-prefix}-tablet:before { content: @fa-var-tablet; } +.@{fa-css-prefix}-mobile-phone:before, +.@{fa-css-prefix}-mobile:before { content: @fa-var-mobile; } +.@{fa-css-prefix}-circle-o:before { content: @fa-var-circle-o; } +.@{fa-css-prefix}-quote-left:before { content: @fa-var-quote-left; } +.@{fa-css-prefix}-quote-right:before { content: @fa-var-quote-right; } +.@{fa-css-prefix}-spinner:before { content: @fa-var-spinner; } +.@{fa-css-prefix}-circle:before { content: @fa-var-circle; } +.@{fa-css-prefix}-mail-reply:before, +.@{fa-css-prefix}-reply:before { content: @fa-var-reply; } +.@{fa-css-prefix}-github-alt:before { content: @fa-var-github-alt; } +.@{fa-css-prefix}-folder-o:before { content: @fa-var-folder-o; } +.@{fa-css-prefix}-folder-open-o:before { content: @fa-var-folder-open-o; } +.@{fa-css-prefix}-smile-o:before { content: @fa-var-smile-o; } +.@{fa-css-prefix}-frown-o:before { content: @fa-var-frown-o; } +.@{fa-css-prefix}-meh-o:before { content: @fa-var-meh-o; } +.@{fa-css-prefix}-gamepad:before { content: @fa-var-gamepad; } +.@{fa-css-prefix}-keyboard-o:before { content: @fa-var-keyboard-o; } +.@{fa-css-prefix}-flag-o:before { content: @fa-var-flag-o; } +.@{fa-css-prefix}-flag-checkered:before { content: @fa-var-flag-checkered; } +.@{fa-css-prefix}-terminal:before { content: @fa-var-terminal; } +.@{fa-css-prefix}-code:before { content: @fa-var-code; } +.@{fa-css-prefix}-mail-reply-all:before, +.@{fa-css-prefix}-reply-all:before { content: @fa-var-reply-all; } +.@{fa-css-prefix}-star-half-empty:before, +.@{fa-css-prefix}-star-half-full:before, +.@{fa-css-prefix}-star-half-o:before { content: @fa-var-star-half-o; } +.@{fa-css-prefix}-location-arrow:before { content: @fa-var-location-arrow; } +.@{fa-css-prefix}-crop:before { content: @fa-var-crop; } +.@{fa-css-prefix}-code-fork:before { content: @fa-var-code-fork; } +.@{fa-css-prefix}-unlink:before, +.@{fa-css-prefix}-chain-broken:before { content: @fa-var-chain-broken; } +.@{fa-css-prefix}-question:before { content: @fa-var-question; } +.@{fa-css-prefix}-info:before { content: @fa-var-info; } +.@{fa-css-prefix}-exclamation:before { content: @fa-var-exclamation; } +.@{fa-css-prefix}-superscript:before { content: @fa-var-superscript; } +.@{fa-css-prefix}-subscript:before { content: @fa-var-subscript; } +.@{fa-css-prefix}-eraser:before { content: @fa-var-eraser; } +.@{fa-css-prefix}-puzzle-piece:before { content: @fa-var-puzzle-piece; } +.@{fa-css-prefix}-microphone:before { content: @fa-var-microphone; } +.@{fa-css-prefix}-microphone-slash:before { content: @fa-var-microphone-slash; } +.@{fa-css-prefix}-shield:before { content: @fa-var-shield; } +.@{fa-css-prefix}-calendar-o:before { content: @fa-var-calendar-o; } +.@{fa-css-prefix}-fire-extinguisher:before { content: @fa-var-fire-extinguisher; } 
+.@{fa-css-prefix}-rocket:before { content: @fa-var-rocket; } +.@{fa-css-prefix}-maxcdn:before { content: @fa-var-maxcdn; } +.@{fa-css-prefix}-chevron-circle-left:before { content: @fa-var-chevron-circle-left; } +.@{fa-css-prefix}-chevron-circle-right:before { content: @fa-var-chevron-circle-right; } +.@{fa-css-prefix}-chevron-circle-up:before { content: @fa-var-chevron-circle-up; } +.@{fa-css-prefix}-chevron-circle-down:before { content: @fa-var-chevron-circle-down; } +.@{fa-css-prefix}-html5:before { content: @fa-var-html5; } +.@{fa-css-prefix}-css3:before { content: @fa-var-css3; } +.@{fa-css-prefix}-anchor:before { content: @fa-var-anchor; } +.@{fa-css-prefix}-unlock-alt:before { content: @fa-var-unlock-alt; } +.@{fa-css-prefix}-bullseye:before { content: @fa-var-bullseye; } +.@{fa-css-prefix}-ellipsis-h:before { content: @fa-var-ellipsis-h; } +.@{fa-css-prefix}-ellipsis-v:before { content: @fa-var-ellipsis-v; } +.@{fa-css-prefix}-rss-square:before { content: @fa-var-rss-square; } +.@{fa-css-prefix}-play-circle:before { content: @fa-var-play-circle; } +.@{fa-css-prefix}-ticket:before { content: @fa-var-ticket; } +.@{fa-css-prefix}-minus-square:before { content: @fa-var-minus-square; } +.@{fa-css-prefix}-minus-square-o:before { content: @fa-var-minus-square-o; } +.@{fa-css-prefix}-level-up:before { content: @fa-var-level-up; } +.@{fa-css-prefix}-level-down:before { content: @fa-var-level-down; } +.@{fa-css-prefix}-check-square:before { content: @fa-var-check-square; } +.@{fa-css-prefix}-pencil-square:before { content: @fa-var-pencil-square; } +.@{fa-css-prefix}-external-link-square:before { content: @fa-var-external-link-square; } +.@{fa-css-prefix}-share-square:before { content: @fa-var-share-square; } +.@{fa-css-prefix}-compass:before { content: @fa-var-compass; } +.@{fa-css-prefix}-toggle-down:before, +.@{fa-css-prefix}-caret-square-o-down:before { content: @fa-var-caret-square-o-down; } +.@{fa-css-prefix}-toggle-up:before, +.@{fa-css-prefix}-caret-square-o-up:before { content: @fa-var-caret-square-o-up; } +.@{fa-css-prefix}-toggle-right:before, +.@{fa-css-prefix}-caret-square-o-right:before { content: @fa-var-caret-square-o-right; } +.@{fa-css-prefix}-euro:before, +.@{fa-css-prefix}-eur:before { content: @fa-var-eur; } +.@{fa-css-prefix}-gbp:before { content: @fa-var-gbp; } +.@{fa-css-prefix}-dollar:before, +.@{fa-css-prefix}-usd:before { content: @fa-var-usd; } +.@{fa-css-prefix}-rupee:before, +.@{fa-css-prefix}-inr:before { content: @fa-var-inr; } +.@{fa-css-prefix}-cny:before, +.@{fa-css-prefix}-rmb:before, +.@{fa-css-prefix}-yen:before, +.@{fa-css-prefix}-jpy:before { content: @fa-var-jpy; } +.@{fa-css-prefix}-ruble:before, +.@{fa-css-prefix}-rouble:before, +.@{fa-css-prefix}-rub:before { content: @fa-var-rub; } +.@{fa-css-prefix}-won:before, +.@{fa-css-prefix}-krw:before { content: @fa-var-krw; } +.@{fa-css-prefix}-bitcoin:before, +.@{fa-css-prefix}-btc:before { content: @fa-var-btc; } +.@{fa-css-prefix}-file:before { content: @fa-var-file; } +.@{fa-css-prefix}-file-text:before { content: @fa-var-file-text; } +.@{fa-css-prefix}-sort-alpha-asc:before { content: @fa-var-sort-alpha-asc; } +.@{fa-css-prefix}-sort-alpha-desc:before { content: @fa-var-sort-alpha-desc; } +.@{fa-css-prefix}-sort-amount-asc:before { content: @fa-var-sort-amount-asc; } +.@{fa-css-prefix}-sort-amount-desc:before { content: @fa-var-sort-amount-desc; } +.@{fa-css-prefix}-sort-numeric-asc:before { content: @fa-var-sort-numeric-asc; } +.@{fa-css-prefix}-sort-numeric-desc:before { content: 
@fa-var-sort-numeric-desc; } +.@{fa-css-prefix}-thumbs-up:before { content: @fa-var-thumbs-up; } +.@{fa-css-prefix}-thumbs-down:before { content: @fa-var-thumbs-down; } +.@{fa-css-prefix}-youtube-square:before { content: @fa-var-youtube-square; } +.@{fa-css-prefix}-youtube:before { content: @fa-var-youtube; } +.@{fa-css-prefix}-xing:before { content: @fa-var-xing; } +.@{fa-css-prefix}-xing-square:before { content: @fa-var-xing-square; } +.@{fa-css-prefix}-youtube-play:before { content: @fa-var-youtube-play; } +.@{fa-css-prefix}-dropbox:before { content: @fa-var-dropbox; } +.@{fa-css-prefix}-stack-overflow:before { content: @fa-var-stack-overflow; } +.@{fa-css-prefix}-instagram:before { content: @fa-var-instagram; } +.@{fa-css-prefix}-flickr:before { content: @fa-var-flickr; } +.@{fa-css-prefix}-adn:before { content: @fa-var-adn; } +.@{fa-css-prefix}-bitbucket:before { content: @fa-var-bitbucket; } +.@{fa-css-prefix}-bitbucket-square:before { content: @fa-var-bitbucket-square; } +.@{fa-css-prefix}-tumblr:before { content: @fa-var-tumblr; } +.@{fa-css-prefix}-tumblr-square:before { content: @fa-var-tumblr-square; } +.@{fa-css-prefix}-long-arrow-down:before { content: @fa-var-long-arrow-down; } +.@{fa-css-prefix}-long-arrow-up:before { content: @fa-var-long-arrow-up; } +.@{fa-css-prefix}-long-arrow-left:before { content: @fa-var-long-arrow-left; } +.@{fa-css-prefix}-long-arrow-right:before { content: @fa-var-long-arrow-right; } +.@{fa-css-prefix}-apple:before { content: @fa-var-apple; } +.@{fa-css-prefix}-windows:before { content: @fa-var-windows; } +.@{fa-css-prefix}-android:before { content: @fa-var-android; } +.@{fa-css-prefix}-linux:before { content: @fa-var-linux; } +.@{fa-css-prefix}-dribbble:before { content: @fa-var-dribbble; } +.@{fa-css-prefix}-skype:before { content: @fa-var-skype; } +.@{fa-css-prefix}-foursquare:before { content: @fa-var-foursquare; } +.@{fa-css-prefix}-trello:before { content: @fa-var-trello; } +.@{fa-css-prefix}-female:before { content: @fa-var-female; } +.@{fa-css-prefix}-male:before { content: @fa-var-male; } +.@{fa-css-prefix}-gittip:before, +.@{fa-css-prefix}-gratipay:before { content: @fa-var-gratipay; } +.@{fa-css-prefix}-sun-o:before { content: @fa-var-sun-o; } +.@{fa-css-prefix}-moon-o:before { content: @fa-var-moon-o; } +.@{fa-css-prefix}-archive:before { content: @fa-var-archive; } +.@{fa-css-prefix}-bug:before { content: @fa-var-bug; } +.@{fa-css-prefix}-vk:before { content: @fa-var-vk; } +.@{fa-css-prefix}-weibo:before { content: @fa-var-weibo; } +.@{fa-css-prefix}-renren:before { content: @fa-var-renren; } +.@{fa-css-prefix}-pagelines:before { content: @fa-var-pagelines; } +.@{fa-css-prefix}-stack-exchange:before { content: @fa-var-stack-exchange; } +.@{fa-css-prefix}-arrow-circle-o-right:before { content: @fa-var-arrow-circle-o-right; } +.@{fa-css-prefix}-arrow-circle-o-left:before { content: @fa-var-arrow-circle-o-left; } +.@{fa-css-prefix}-toggle-left:before, +.@{fa-css-prefix}-caret-square-o-left:before { content: @fa-var-caret-square-o-left; } +.@{fa-css-prefix}-dot-circle-o:before { content: @fa-var-dot-circle-o; } +.@{fa-css-prefix}-wheelchair:before { content: @fa-var-wheelchair; } +.@{fa-css-prefix}-vimeo-square:before { content: @fa-var-vimeo-square; } +.@{fa-css-prefix}-turkish-lira:before, +.@{fa-css-prefix}-try:before { content: @fa-var-try; } +.@{fa-css-prefix}-plus-square-o:before { content: @fa-var-plus-square-o; } +.@{fa-css-prefix}-space-shuttle:before { content: @fa-var-space-shuttle; } +.@{fa-css-prefix}-slack:before { content: 
@fa-var-slack; } +.@{fa-css-prefix}-envelope-square:before { content: @fa-var-envelope-square; } +.@{fa-css-prefix}-wordpress:before { content: @fa-var-wordpress; } +.@{fa-css-prefix}-openid:before { content: @fa-var-openid; } +.@{fa-css-prefix}-institution:before, +.@{fa-css-prefix}-bank:before, +.@{fa-css-prefix}-university:before { content: @fa-var-university; } +.@{fa-css-prefix}-mortar-board:before, +.@{fa-css-prefix}-graduation-cap:before { content: @fa-var-graduation-cap; } +.@{fa-css-prefix}-yahoo:before { content: @fa-var-yahoo; } +.@{fa-css-prefix}-google:before { content: @fa-var-google; } +.@{fa-css-prefix}-reddit:before { content: @fa-var-reddit; } +.@{fa-css-prefix}-reddit-square:before { content: @fa-var-reddit-square; } +.@{fa-css-prefix}-stumbleupon-circle:before { content: @fa-var-stumbleupon-circle; } +.@{fa-css-prefix}-stumbleupon:before { content: @fa-var-stumbleupon; } +.@{fa-css-prefix}-delicious:before { content: @fa-var-delicious; } +.@{fa-css-prefix}-digg:before { content: @fa-var-digg; } +.@{fa-css-prefix}-pied-piper-pp:before { content: @fa-var-pied-piper-pp; } +.@{fa-css-prefix}-pied-piper-alt:before { content: @fa-var-pied-piper-alt; } +.@{fa-css-prefix}-drupal:before { content: @fa-var-drupal; } +.@{fa-css-prefix}-joomla:before { content: @fa-var-joomla; } +.@{fa-css-prefix}-language:before { content: @fa-var-language; } +.@{fa-css-prefix}-fax:before { content: @fa-var-fax; } +.@{fa-css-prefix}-building:before { content: @fa-var-building; } +.@{fa-css-prefix}-child:before { content: @fa-var-child; } +.@{fa-css-prefix}-paw:before { content: @fa-var-paw; } +.@{fa-css-prefix}-spoon:before { content: @fa-var-spoon; } +.@{fa-css-prefix}-cube:before { content: @fa-var-cube; } +.@{fa-css-prefix}-cubes:before { content: @fa-var-cubes; } +.@{fa-css-prefix}-behance:before { content: @fa-var-behance; } +.@{fa-css-prefix}-behance-square:before { content: @fa-var-behance-square; } +.@{fa-css-prefix}-steam:before { content: @fa-var-steam; } +.@{fa-css-prefix}-steam-square:before { content: @fa-var-steam-square; } +.@{fa-css-prefix}-recycle:before { content: @fa-var-recycle; } +.@{fa-css-prefix}-automobile:before, +.@{fa-css-prefix}-car:before { content: @fa-var-car; } +.@{fa-css-prefix}-cab:before, +.@{fa-css-prefix}-taxi:before { content: @fa-var-taxi; } +.@{fa-css-prefix}-tree:before { content: @fa-var-tree; } +.@{fa-css-prefix}-spotify:before { content: @fa-var-spotify; } +.@{fa-css-prefix}-deviantart:before { content: @fa-var-deviantart; } +.@{fa-css-prefix}-soundcloud:before { content: @fa-var-soundcloud; } +.@{fa-css-prefix}-database:before { content: @fa-var-database; } +.@{fa-css-prefix}-file-pdf-o:before { content: @fa-var-file-pdf-o; } +.@{fa-css-prefix}-file-word-o:before { content: @fa-var-file-word-o; } +.@{fa-css-prefix}-file-excel-o:before { content: @fa-var-file-excel-o; } +.@{fa-css-prefix}-file-powerpoint-o:before { content: @fa-var-file-powerpoint-o; } +.@{fa-css-prefix}-file-photo-o:before, +.@{fa-css-prefix}-file-picture-o:before, +.@{fa-css-prefix}-file-image-o:before { content: @fa-var-file-image-o; } +.@{fa-css-prefix}-file-zip-o:before, +.@{fa-css-prefix}-file-archive-o:before { content: @fa-var-file-archive-o; } +.@{fa-css-prefix}-file-sound-o:before, +.@{fa-css-prefix}-file-audio-o:before { content: @fa-var-file-audio-o; } +.@{fa-css-prefix}-file-movie-o:before, +.@{fa-css-prefix}-file-video-o:before { content: @fa-var-file-video-o; } +.@{fa-css-prefix}-file-code-o:before { content: @fa-var-file-code-o; } +.@{fa-css-prefix}-vine:before { content: 
@fa-var-vine; } +.@{fa-css-prefix}-codepen:before { content: @fa-var-codepen; } +.@{fa-css-prefix}-jsfiddle:before { content: @fa-var-jsfiddle; } +.@{fa-css-prefix}-life-bouy:before, +.@{fa-css-prefix}-life-buoy:before, +.@{fa-css-prefix}-life-saver:before, +.@{fa-css-prefix}-support:before, +.@{fa-css-prefix}-life-ring:before { content: @fa-var-life-ring; } +.@{fa-css-prefix}-circle-o-notch:before { content: @fa-var-circle-o-notch; } +.@{fa-css-prefix}-ra:before, +.@{fa-css-prefix}-resistance:before, +.@{fa-css-prefix}-rebel:before { content: @fa-var-rebel; } +.@{fa-css-prefix}-ge:before, +.@{fa-css-prefix}-empire:before { content: @fa-var-empire; } +.@{fa-css-prefix}-git-square:before { content: @fa-var-git-square; } +.@{fa-css-prefix}-git:before { content: @fa-var-git; } +.@{fa-css-prefix}-y-combinator-square:before, +.@{fa-css-prefix}-yc-square:before, +.@{fa-css-prefix}-hacker-news:before { content: @fa-var-hacker-news; } +.@{fa-css-prefix}-tencent-weibo:before { content: @fa-var-tencent-weibo; } +.@{fa-css-prefix}-qq:before { content: @fa-var-qq; } +.@{fa-css-prefix}-wechat:before, +.@{fa-css-prefix}-weixin:before { content: @fa-var-weixin; } +.@{fa-css-prefix}-send:before, +.@{fa-css-prefix}-paper-plane:before { content: @fa-var-paper-plane; } +.@{fa-css-prefix}-send-o:before, +.@{fa-css-prefix}-paper-plane-o:before { content: @fa-var-paper-plane-o; } +.@{fa-css-prefix}-history:before { content: @fa-var-history; } +.@{fa-css-prefix}-circle-thin:before { content: @fa-var-circle-thin; } +.@{fa-css-prefix}-header:before { content: @fa-var-header; } +.@{fa-css-prefix}-paragraph:before { content: @fa-var-paragraph; } +.@{fa-css-prefix}-sliders:before { content: @fa-var-sliders; } +.@{fa-css-prefix}-share-alt:before { content: @fa-var-share-alt; } +.@{fa-css-prefix}-share-alt-square:before { content: @fa-var-share-alt-square; } +.@{fa-css-prefix}-bomb:before { content: @fa-var-bomb; } +.@{fa-css-prefix}-soccer-ball-o:before, +.@{fa-css-prefix}-futbol-o:before { content: @fa-var-futbol-o; } +.@{fa-css-prefix}-tty:before { content: @fa-var-tty; } +.@{fa-css-prefix}-binoculars:before { content: @fa-var-binoculars; } +.@{fa-css-prefix}-plug:before { content: @fa-var-plug; } +.@{fa-css-prefix}-slideshare:before { content: @fa-var-slideshare; } +.@{fa-css-prefix}-twitch:before { content: @fa-var-twitch; } +.@{fa-css-prefix}-yelp:before { content: @fa-var-yelp; } +.@{fa-css-prefix}-newspaper-o:before { content: @fa-var-newspaper-o; } +.@{fa-css-prefix}-wifi:before { content: @fa-var-wifi; } +.@{fa-css-prefix}-calculator:before { content: @fa-var-calculator; } +.@{fa-css-prefix}-paypal:before { content: @fa-var-paypal; } +.@{fa-css-prefix}-google-wallet:before { content: @fa-var-google-wallet; } +.@{fa-css-prefix}-cc-visa:before { content: @fa-var-cc-visa; } +.@{fa-css-prefix}-cc-mastercard:before { content: @fa-var-cc-mastercard; } +.@{fa-css-prefix}-cc-discover:before { content: @fa-var-cc-discover; } +.@{fa-css-prefix}-cc-amex:before { content: @fa-var-cc-amex; } +.@{fa-css-prefix}-cc-paypal:before { content: @fa-var-cc-paypal; } +.@{fa-css-prefix}-cc-stripe:before { content: @fa-var-cc-stripe; } +.@{fa-css-prefix}-bell-slash:before { content: @fa-var-bell-slash; } +.@{fa-css-prefix}-bell-slash-o:before { content: @fa-var-bell-slash-o; } +.@{fa-css-prefix}-trash:before { content: @fa-var-trash; } +.@{fa-css-prefix}-copyright:before { content: @fa-var-copyright; } +.@{fa-css-prefix}-at:before { content: @fa-var-at; } +.@{fa-css-prefix}-eyedropper:before { content: @fa-var-eyedropper; } 
+.@{fa-css-prefix}-paint-brush:before { content: @fa-var-paint-brush; } +.@{fa-css-prefix}-birthday-cake:before { content: @fa-var-birthday-cake; } +.@{fa-css-prefix}-area-chart:before { content: @fa-var-area-chart; } +.@{fa-css-prefix}-pie-chart:before { content: @fa-var-pie-chart; } +.@{fa-css-prefix}-line-chart:before { content: @fa-var-line-chart; } +.@{fa-css-prefix}-lastfm:before { content: @fa-var-lastfm; } +.@{fa-css-prefix}-lastfm-square:before { content: @fa-var-lastfm-square; } +.@{fa-css-prefix}-toggle-off:before { content: @fa-var-toggle-off; } +.@{fa-css-prefix}-toggle-on:before { content: @fa-var-toggle-on; } +.@{fa-css-prefix}-bicycle:before { content: @fa-var-bicycle; } +.@{fa-css-prefix}-bus:before { content: @fa-var-bus; } +.@{fa-css-prefix}-ioxhost:before { content: @fa-var-ioxhost; } +.@{fa-css-prefix}-angellist:before { content: @fa-var-angellist; } +.@{fa-css-prefix}-cc:before { content: @fa-var-cc; } +.@{fa-css-prefix}-shekel:before, +.@{fa-css-prefix}-sheqel:before, +.@{fa-css-prefix}-ils:before { content: @fa-var-ils; } +.@{fa-css-prefix}-meanpath:before { content: @fa-var-meanpath; } +.@{fa-css-prefix}-buysellads:before { content: @fa-var-buysellads; } +.@{fa-css-prefix}-connectdevelop:before { content: @fa-var-connectdevelop; } +.@{fa-css-prefix}-dashcube:before { content: @fa-var-dashcube; } +.@{fa-css-prefix}-forumbee:before { content: @fa-var-forumbee; } +.@{fa-css-prefix}-leanpub:before { content: @fa-var-leanpub; } +.@{fa-css-prefix}-sellsy:before { content: @fa-var-sellsy; } +.@{fa-css-prefix}-shirtsinbulk:before { content: @fa-var-shirtsinbulk; } +.@{fa-css-prefix}-simplybuilt:before { content: @fa-var-simplybuilt; } +.@{fa-css-prefix}-skyatlas:before { content: @fa-var-skyatlas; } +.@{fa-css-prefix}-cart-plus:before { content: @fa-var-cart-plus; } +.@{fa-css-prefix}-cart-arrow-down:before { content: @fa-var-cart-arrow-down; } +.@{fa-css-prefix}-diamond:before { content: @fa-var-diamond; } +.@{fa-css-prefix}-ship:before { content: @fa-var-ship; } +.@{fa-css-prefix}-user-secret:before { content: @fa-var-user-secret; } +.@{fa-css-prefix}-motorcycle:before { content: @fa-var-motorcycle; } +.@{fa-css-prefix}-street-view:before { content: @fa-var-street-view; } +.@{fa-css-prefix}-heartbeat:before { content: @fa-var-heartbeat; } +.@{fa-css-prefix}-venus:before { content: @fa-var-venus; } +.@{fa-css-prefix}-mars:before { content: @fa-var-mars; } +.@{fa-css-prefix}-mercury:before { content: @fa-var-mercury; } +.@{fa-css-prefix}-intersex:before, +.@{fa-css-prefix}-transgender:before { content: @fa-var-transgender; } +.@{fa-css-prefix}-transgender-alt:before { content: @fa-var-transgender-alt; } +.@{fa-css-prefix}-venus-double:before { content: @fa-var-venus-double; } +.@{fa-css-prefix}-mars-double:before { content: @fa-var-mars-double; } +.@{fa-css-prefix}-venus-mars:before { content: @fa-var-venus-mars; } +.@{fa-css-prefix}-mars-stroke:before { content: @fa-var-mars-stroke; } +.@{fa-css-prefix}-mars-stroke-v:before { content: @fa-var-mars-stroke-v; } +.@{fa-css-prefix}-mars-stroke-h:before { content: @fa-var-mars-stroke-h; } +.@{fa-css-prefix}-neuter:before { content: @fa-var-neuter; } +.@{fa-css-prefix}-genderless:before { content: @fa-var-genderless; } +.@{fa-css-prefix}-facebook-official:before { content: @fa-var-facebook-official; } +.@{fa-css-prefix}-pinterest-p:before { content: @fa-var-pinterest-p; } +.@{fa-css-prefix}-whatsapp:before { content: @fa-var-whatsapp; } +.@{fa-css-prefix}-server:before { content: @fa-var-server; } 
+.@{fa-css-prefix}-user-plus:before { content: @fa-var-user-plus; } +.@{fa-css-prefix}-user-times:before { content: @fa-var-user-times; } +.@{fa-css-prefix}-hotel:before, +.@{fa-css-prefix}-bed:before { content: @fa-var-bed; } +.@{fa-css-prefix}-viacoin:before { content: @fa-var-viacoin; } +.@{fa-css-prefix}-train:before { content: @fa-var-train; } +.@{fa-css-prefix}-subway:before { content: @fa-var-subway; } +.@{fa-css-prefix}-medium:before { content: @fa-var-medium; } +.@{fa-css-prefix}-yc:before, +.@{fa-css-prefix}-y-combinator:before { content: @fa-var-y-combinator; } +.@{fa-css-prefix}-optin-monster:before { content: @fa-var-optin-monster; } +.@{fa-css-prefix}-opencart:before { content: @fa-var-opencart; } +.@{fa-css-prefix}-expeditedssl:before { content: @fa-var-expeditedssl; } +.@{fa-css-prefix}-battery-4:before, +.@{fa-css-prefix}-battery:before, +.@{fa-css-prefix}-battery-full:before { content: @fa-var-battery-full; } +.@{fa-css-prefix}-battery-3:before, +.@{fa-css-prefix}-battery-three-quarters:before { content: @fa-var-battery-three-quarters; } +.@{fa-css-prefix}-battery-2:before, +.@{fa-css-prefix}-battery-half:before { content: @fa-var-battery-half; } +.@{fa-css-prefix}-battery-1:before, +.@{fa-css-prefix}-battery-quarter:before { content: @fa-var-battery-quarter; } +.@{fa-css-prefix}-battery-0:before, +.@{fa-css-prefix}-battery-empty:before { content: @fa-var-battery-empty; } +.@{fa-css-prefix}-mouse-pointer:before { content: @fa-var-mouse-pointer; } +.@{fa-css-prefix}-i-cursor:before { content: @fa-var-i-cursor; } +.@{fa-css-prefix}-object-group:before { content: @fa-var-object-group; } +.@{fa-css-prefix}-object-ungroup:before { content: @fa-var-object-ungroup; } +.@{fa-css-prefix}-sticky-note:before { content: @fa-var-sticky-note; } +.@{fa-css-prefix}-sticky-note-o:before { content: @fa-var-sticky-note-o; } +.@{fa-css-prefix}-cc-jcb:before { content: @fa-var-cc-jcb; } +.@{fa-css-prefix}-cc-diners-club:before { content: @fa-var-cc-diners-club; } +.@{fa-css-prefix}-clone:before { content: @fa-var-clone; } +.@{fa-css-prefix}-balance-scale:before { content: @fa-var-balance-scale; } +.@{fa-css-prefix}-hourglass-o:before { content: @fa-var-hourglass-o; } +.@{fa-css-prefix}-hourglass-1:before, +.@{fa-css-prefix}-hourglass-start:before { content: @fa-var-hourglass-start; } +.@{fa-css-prefix}-hourglass-2:before, +.@{fa-css-prefix}-hourglass-half:before { content: @fa-var-hourglass-half; } +.@{fa-css-prefix}-hourglass-3:before, +.@{fa-css-prefix}-hourglass-end:before { content: @fa-var-hourglass-end; } +.@{fa-css-prefix}-hourglass:before { content: @fa-var-hourglass; } +.@{fa-css-prefix}-hand-grab-o:before, +.@{fa-css-prefix}-hand-rock-o:before { content: @fa-var-hand-rock-o; } +.@{fa-css-prefix}-hand-stop-o:before, +.@{fa-css-prefix}-hand-paper-o:before { content: @fa-var-hand-paper-o; } +.@{fa-css-prefix}-hand-scissors-o:before { content: @fa-var-hand-scissors-o; } +.@{fa-css-prefix}-hand-lizard-o:before { content: @fa-var-hand-lizard-o; } +.@{fa-css-prefix}-hand-spock-o:before { content: @fa-var-hand-spock-o; } +.@{fa-css-prefix}-hand-pointer-o:before { content: @fa-var-hand-pointer-o; } +.@{fa-css-prefix}-hand-peace-o:before { content: @fa-var-hand-peace-o; } +.@{fa-css-prefix}-trademark:before { content: @fa-var-trademark; } +.@{fa-css-prefix}-registered:before { content: @fa-var-registered; } +.@{fa-css-prefix}-creative-commons:before { content: @fa-var-creative-commons; } +.@{fa-css-prefix}-gg:before { content: @fa-var-gg; } +.@{fa-css-prefix}-gg-circle:before { content: 
@fa-var-gg-circle; } +.@{fa-css-prefix}-tripadvisor:before { content: @fa-var-tripadvisor; } +.@{fa-css-prefix}-odnoklassniki:before { content: @fa-var-odnoklassniki; } +.@{fa-css-prefix}-odnoklassniki-square:before { content: @fa-var-odnoklassniki-square; } +.@{fa-css-prefix}-get-pocket:before { content: @fa-var-get-pocket; } +.@{fa-css-prefix}-wikipedia-w:before { content: @fa-var-wikipedia-w; } +.@{fa-css-prefix}-safari:before { content: @fa-var-safari; } +.@{fa-css-prefix}-chrome:before { content: @fa-var-chrome; } +.@{fa-css-prefix}-firefox:before { content: @fa-var-firefox; } +.@{fa-css-prefix}-opera:before { content: @fa-var-opera; } +.@{fa-css-prefix}-internet-explorer:before { content: @fa-var-internet-explorer; } +.@{fa-css-prefix}-tv:before, +.@{fa-css-prefix}-television:before { content: @fa-var-television; } +.@{fa-css-prefix}-contao:before { content: @fa-var-contao; } +.@{fa-css-prefix}-500px:before { content: @fa-var-500px; } +.@{fa-css-prefix}-amazon:before { content: @fa-var-amazon; } +.@{fa-css-prefix}-calendar-plus-o:before { content: @fa-var-calendar-plus-o; } +.@{fa-css-prefix}-calendar-minus-o:before { content: @fa-var-calendar-minus-o; } +.@{fa-css-prefix}-calendar-times-o:before { content: @fa-var-calendar-times-o; } +.@{fa-css-prefix}-calendar-check-o:before { content: @fa-var-calendar-check-o; } +.@{fa-css-prefix}-industry:before { content: @fa-var-industry; } +.@{fa-css-prefix}-map-pin:before { content: @fa-var-map-pin; } +.@{fa-css-prefix}-map-signs:before { content: @fa-var-map-signs; } +.@{fa-css-prefix}-map-o:before { content: @fa-var-map-o; } +.@{fa-css-prefix}-map:before { content: @fa-var-map; } +.@{fa-css-prefix}-commenting:before { content: @fa-var-commenting; } +.@{fa-css-prefix}-commenting-o:before { content: @fa-var-commenting-o; } +.@{fa-css-prefix}-houzz:before { content: @fa-var-houzz; } +.@{fa-css-prefix}-vimeo:before { content: @fa-var-vimeo; } +.@{fa-css-prefix}-black-tie:before { content: @fa-var-black-tie; } +.@{fa-css-prefix}-fonticons:before { content: @fa-var-fonticons; } +.@{fa-css-prefix}-reddit-alien:before { content: @fa-var-reddit-alien; } +.@{fa-css-prefix}-edge:before { content: @fa-var-edge; } +.@{fa-css-prefix}-credit-card-alt:before { content: @fa-var-credit-card-alt; } +.@{fa-css-prefix}-codiepie:before { content: @fa-var-codiepie; } +.@{fa-css-prefix}-modx:before { content: @fa-var-modx; } +.@{fa-css-prefix}-fort-awesome:before { content: @fa-var-fort-awesome; } +.@{fa-css-prefix}-usb:before { content: @fa-var-usb; } +.@{fa-css-prefix}-product-hunt:before { content: @fa-var-product-hunt; } +.@{fa-css-prefix}-mixcloud:before { content: @fa-var-mixcloud; } +.@{fa-css-prefix}-scribd:before { content: @fa-var-scribd; } +.@{fa-css-prefix}-pause-circle:before { content: @fa-var-pause-circle; } +.@{fa-css-prefix}-pause-circle-o:before { content: @fa-var-pause-circle-o; } +.@{fa-css-prefix}-stop-circle:before { content: @fa-var-stop-circle; } +.@{fa-css-prefix}-stop-circle-o:before { content: @fa-var-stop-circle-o; } +.@{fa-css-prefix}-shopping-bag:before { content: @fa-var-shopping-bag; } +.@{fa-css-prefix}-shopping-basket:before { content: @fa-var-shopping-basket; } +.@{fa-css-prefix}-hashtag:before { content: @fa-var-hashtag; } +.@{fa-css-prefix}-bluetooth:before { content: @fa-var-bluetooth; } +.@{fa-css-prefix}-bluetooth-b:before { content: @fa-var-bluetooth-b; } +.@{fa-css-prefix}-percent:before { content: @fa-var-percent; } +.@{fa-css-prefix}-gitlab:before { content: @fa-var-gitlab; } +.@{fa-css-prefix}-wpbeginner:before { 
content: @fa-var-wpbeginner; } +.@{fa-css-prefix}-wpforms:before { content: @fa-var-wpforms; } +.@{fa-css-prefix}-envira:before { content: @fa-var-envira; } +.@{fa-css-prefix}-universal-access:before { content: @fa-var-universal-access; } +.@{fa-css-prefix}-wheelchair-alt:before { content: @fa-var-wheelchair-alt; } +.@{fa-css-prefix}-question-circle-o:before { content: @fa-var-question-circle-o; } +.@{fa-css-prefix}-blind:before { content: @fa-var-blind; } +.@{fa-css-prefix}-audio-description:before { content: @fa-var-audio-description; } +.@{fa-css-prefix}-volume-control-phone:before { content: @fa-var-volume-control-phone; } +.@{fa-css-prefix}-braille:before { content: @fa-var-braille; } +.@{fa-css-prefix}-assistive-listening-systems:before { content: @fa-var-assistive-listening-systems; } +.@{fa-css-prefix}-asl-interpreting:before, +.@{fa-css-prefix}-american-sign-language-interpreting:before { content: @fa-var-american-sign-language-interpreting; } +.@{fa-css-prefix}-deafness:before, +.@{fa-css-prefix}-hard-of-hearing:before, +.@{fa-css-prefix}-deaf:before { content: @fa-var-deaf; } +.@{fa-css-prefix}-glide:before { content: @fa-var-glide; } +.@{fa-css-prefix}-glide-g:before { content: @fa-var-glide-g; } +.@{fa-css-prefix}-signing:before, +.@{fa-css-prefix}-sign-language:before { content: @fa-var-sign-language; } +.@{fa-css-prefix}-low-vision:before { content: @fa-var-low-vision; } +.@{fa-css-prefix}-viadeo:before { content: @fa-var-viadeo; } +.@{fa-css-prefix}-viadeo-square:before { content: @fa-var-viadeo-square; } +.@{fa-css-prefix}-snapchat:before { content: @fa-var-snapchat; } +.@{fa-css-prefix}-snapchat-ghost:before { content: @fa-var-snapchat-ghost; } +.@{fa-css-prefix}-snapchat-square:before { content: @fa-var-snapchat-square; } +.@{fa-css-prefix}-pied-piper:before { content: @fa-var-pied-piper; } +.@{fa-css-prefix}-first-order:before { content: @fa-var-first-order; } +.@{fa-css-prefix}-yoast:before { content: @fa-var-yoast; } +.@{fa-css-prefix}-themeisle:before { content: @fa-var-themeisle; } +.@{fa-css-prefix}-google-plus-circle:before, +.@{fa-css-prefix}-google-plus-official:before { content: @fa-var-google-plus-official; } +.@{fa-css-prefix}-fa:before, +.@{fa-css-prefix}-font-awesome:before { content: @fa-var-font-awesome; } +.@{fa-css-prefix}-handshake-o:before { content: @fa-var-handshake-o; } +.@{fa-css-prefix}-envelope-open:before { content: @fa-var-envelope-open; } +.@{fa-css-prefix}-envelope-open-o:before { content: @fa-var-envelope-open-o; } +.@{fa-css-prefix}-linode:before { content: @fa-var-linode; } +.@{fa-css-prefix}-address-book:before { content: @fa-var-address-book; } +.@{fa-css-prefix}-address-book-o:before { content: @fa-var-address-book-o; } +.@{fa-css-prefix}-vcard:before, +.@{fa-css-prefix}-address-card:before { content: @fa-var-address-card; } +.@{fa-css-prefix}-vcard-o:before, +.@{fa-css-prefix}-address-card-o:before { content: @fa-var-address-card-o; } +.@{fa-css-prefix}-user-circle:before { content: @fa-var-user-circle; } +.@{fa-css-prefix}-user-circle-o:before { content: @fa-var-user-circle-o; } +.@{fa-css-prefix}-user-o:before { content: @fa-var-user-o; } +.@{fa-css-prefix}-id-badge:before { content: @fa-var-id-badge; } +.@{fa-css-prefix}-drivers-license:before, +.@{fa-css-prefix}-id-card:before { content: @fa-var-id-card; } +.@{fa-css-prefix}-drivers-license-o:before, +.@{fa-css-prefix}-id-card-o:before { content: @fa-var-id-card-o; } +.@{fa-css-prefix}-quora:before { content: @fa-var-quora; } +.@{fa-css-prefix}-free-code-camp:before { content: 
@fa-var-free-code-camp; } +.@{fa-css-prefix}-telegram:before { content: @fa-var-telegram; } +.@{fa-css-prefix}-thermometer-4:before, +.@{fa-css-prefix}-thermometer:before, +.@{fa-css-prefix}-thermometer-full:before { content: @fa-var-thermometer-full; } +.@{fa-css-prefix}-thermometer-3:before, +.@{fa-css-prefix}-thermometer-three-quarters:before { content: @fa-var-thermometer-three-quarters; } +.@{fa-css-prefix}-thermometer-2:before, +.@{fa-css-prefix}-thermometer-half:before { content: @fa-var-thermometer-half; } +.@{fa-css-prefix}-thermometer-1:before, +.@{fa-css-prefix}-thermometer-quarter:before { content: @fa-var-thermometer-quarter; } +.@{fa-css-prefix}-thermometer-0:before, +.@{fa-css-prefix}-thermometer-empty:before { content: @fa-var-thermometer-empty; } +.@{fa-css-prefix}-shower:before { content: @fa-var-shower; } +.@{fa-css-prefix}-bathtub:before, +.@{fa-css-prefix}-s15:before, +.@{fa-css-prefix}-bath:before { content: @fa-var-bath; } +.@{fa-css-prefix}-podcast:before { content: @fa-var-podcast; } +.@{fa-css-prefix}-window-maximize:before { content: @fa-var-window-maximize; } +.@{fa-css-prefix}-window-minimize:before { content: @fa-var-window-minimize; } +.@{fa-css-prefix}-window-restore:before { content: @fa-var-window-restore; } +.@{fa-css-prefix}-times-rectangle:before, +.@{fa-css-prefix}-window-close:before { content: @fa-var-window-close; } +.@{fa-css-prefix}-times-rectangle-o:before, +.@{fa-css-prefix}-window-close-o:before { content: @fa-var-window-close-o; } +.@{fa-css-prefix}-bandcamp:before { content: @fa-var-bandcamp; } +.@{fa-css-prefix}-grav:before { content: @fa-var-grav; } +.@{fa-css-prefix}-etsy:before { content: @fa-var-etsy; } +.@{fa-css-prefix}-imdb:before { content: @fa-var-imdb; } +.@{fa-css-prefix}-ravelry:before { content: @fa-var-ravelry; } +.@{fa-css-prefix}-eercast:before { content: @fa-var-eercast; } +.@{fa-css-prefix}-microchip:before { content: @fa-var-microchip; } +.@{fa-css-prefix}-snowflake-o:before { content: @fa-var-snowflake-o; } +.@{fa-css-prefix}-superpowers:before { content: @fa-var-superpowers; } +.@{fa-css-prefix}-wpexplorer:before { content: @fa-var-wpexplorer; } +.@{fa-css-prefix}-meetup:before { content: @fa-var-meetup; }
diff --git a/_site/site/public/font-awesome-4.7.0/less/larger.less b/_site/site/public/font-awesome-4.7.0/less/larger.less
new file mode 100755
index 00000000..c9d64677
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/larger.less
@@ -0,0 +1,13 @@
+// Icon Sizes
+// -------------------------
+
+/* makes the font 33% larger relative to the icon container */
+.@{fa-css-prefix}-lg {
+  font-size: (4em / 3);
+  line-height: (3em / 4);
+  vertical-align: -15%;
+}
+.@{fa-css-prefix}-2x { font-size: 2em; }
+.@{fa-css-prefix}-3x { font-size: 3em; }
+.@{fa-css-prefix}-4x { font-size: 4em; }
+.@{fa-css-prefix}-5x { font-size: 5em; }
diff --git a/_site/site/public/font-awesome-4.7.0/less/list.less b/_site/site/public/font-awesome-4.7.0/less/list.less
new file mode 100755
index 00000000..0b440382
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/list.less
@@ -0,0 +1,19 @@
+// List Icons
+// -------------------------
+
+.@{fa-css-prefix}-ul {
+  padding-left: 0;
+  margin-left: @fa-li-width;
+  list-style-type: none;
+  > li { position: relative; }
+}
+.@{fa-css-prefix}-li {
+  position: absolute;
+  left: -@fa-li-width;
+  width: @fa-li-width;
+  top: (2em / 14);
+  text-align: center;
+  &.@{fa-css-prefix}-lg {
+    left: (-@fa-li-width + (4em / 14));
+  }
+}
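For orientation, a rough sketch of the CSS that larger.less and list.less above compile to under the defaults defined later in variables.less (@fa-css-prefix: fa, @fa-li-width: 30em/14). The rounded numbers below are derived arithmetic, not output taken from this patch:

    .fa-lg { font-size: 1.33333333em; line-height: 0.75em; vertical-align: -15%; }
    .fa-ul { padding-left: 0; margin-left: 2.14285714em; list-style-type: none; }
    .fa-ul > li { position: relative; }
    .fa-li { position: absolute; left: -2.14285714em; width: 2.14285714em; top: 0.14285714em; text-align: center; }
    .fa-li.fa-lg { left: -1.85714286em; }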
diff --git a/_site/site/public/font-awesome-4.7.0/less/mixins.less b/_site/site/public/font-awesome-4.7.0/less/mixins.less
new file mode 100755
index 00000000..beef231d
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/mixins.less
@@ -0,0 +1,60 @@
+// Mixins
+// --------------------------
+
+.fa-icon() {
+  display: inline-block;
+  font: normal normal normal @fa-font-size-base/@fa-line-height-base FontAwesome; // shortening font declaration
+  font-size: inherit; // can't have font-size inherit on line above, so need to override
+  text-rendering: auto; // optimizelegibility throws things off #1094
+  -webkit-font-smoothing: antialiased;
+  -moz-osx-font-smoothing: grayscale;
+
+}
+
+.fa-icon-rotate(@degrees, @rotation) {
+  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=@{rotation})";
+  -webkit-transform: rotate(@degrees);
+  -ms-transform: rotate(@degrees);
+  transform: rotate(@degrees);
+}
+
+.fa-icon-flip(@horiz, @vert, @rotation) {
+  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=@{rotation}, mirror=1)";
+  -webkit-transform: scale(@horiz, @vert);
+  -ms-transform: scale(@horiz, @vert);
+  transform: scale(@horiz, @vert);
+}
+
+
+// Only display content to screen readers. A la Bootstrap 4.
+//
+// See: http://a11yproject.com/posts/how-to-hide-content/
+
+.sr-only() {
+  position: absolute;
+  width: 1px;
+  height: 1px;
+  padding: 0;
+  margin: -1px;
+  overflow: hidden;
+  clip: rect(0,0,0,0);
+  border: 0;
+}
+
+// Use in conjunction with .sr-only to only display content when it's focused.
+//
+// Useful for "Skip to main content" links; see http://www.w3.org/TR/2013/NOTE-WCAG20-TECHS-20130905/G1
+//
+// Credit: HTML5 Boilerplate
+
+.sr-only-focusable() {
+  &:active,
+  &:focus {
+    position: static;
+    width: auto;
+    height: auto;
+    margin: 0;
+    overflow: visible;
+    clip: auto;
+  }
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/path.less b/_site/site/public/font-awesome-4.7.0/less/path.less
new file mode 100755
index 00000000..835be41f
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/path.less
@@ -0,0 +1,15 @@
+/* FONT PATH
+ * -------------------------- */
+
+@font-face {
+  font-family: 'FontAwesome';
+  src: url('@{fa-font-path}/fontawesome-webfont.eot?v=@{fa-version}');
+  src: url('@{fa-font-path}/fontawesome-webfont.eot?#iefix&v=@{fa-version}') format('embedded-opentype'),
+    url('@{fa-font-path}/fontawesome-webfont.woff2?v=@{fa-version}') format('woff2'),
+    url('@{fa-font-path}/fontawesome-webfont.woff?v=@{fa-version}') format('woff'),
+    url('@{fa-font-path}/fontawesome-webfont.ttf?v=@{fa-version}') format('truetype'),
+    url('@{fa-font-path}/fontawesome-webfont.svg?v=@{fa-version}#fontawesomeregular') format('svg');
+  // src: url('@{fa-font-path}/FontAwesome.otf') format('opentype'); // used when developing fonts
+  font-weight: normal;
+  font-style: normal;
+}
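The mixins and the @font-face above are the building blocks the rest of these files consume. As a hypothetical illustration (the class name is invented, not part of this patch), a custom rule could be assembled the same way rotated-flipped.less below builds its own:

    // Hypothetical example: a pre-rotated flag icon built from this patch's pieces.
    .my-rotated-flag:before {
      .fa-icon();                // FontAwesome font stack, smoothing, text-rendering
      content: @fa-var-flag;     // "\f024", defined in variables.less below
      .fa-icon-rotate(90deg, 1); // CSS transform plus the IE8/9 -ms-filter fallback
    }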
diff --git a/_site/site/public/font-awesome-4.7.0/less/rotated-flipped.less b/_site/site/public/font-awesome-4.7.0/less/rotated-flipped.less
new file mode 100755
index 00000000..f6ba8147
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/rotated-flipped.less
@@ -0,0 +1,20 @@
+// Rotated & Flipped Icons
+// -------------------------
+
+.@{fa-css-prefix}-rotate-90 { .fa-icon-rotate(90deg, 1); }
+.@{fa-css-prefix}-rotate-180 { .fa-icon-rotate(180deg, 2); }
+.@{fa-css-prefix}-rotate-270 { .fa-icon-rotate(270deg, 3); }
+
+.@{fa-css-prefix}-flip-horizontal { .fa-icon-flip(-1, 1, 0); }
+.@{fa-css-prefix}-flip-vertical { .fa-icon-flip(1, -1, 2); }
+
+// Hook for IE8-9
+// -------------------------
+
+:root .@{fa-css-prefix}-rotate-90,
+:root .@{fa-css-prefix}-rotate-180,
+:root .@{fa-css-prefix}-rotate-270,
+:root .@{fa-css-prefix}-flip-horizontal,
+:root .@{fa-css-prefix}-flip-vertical {
+  filter: none;
+}
diff --git a/_site/site/public/font-awesome-4.7.0/less/screen-reader.less b/_site/site/public/font-awesome-4.7.0/less/screen-reader.less
new file mode 100755
index 00000000..11c18819
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/screen-reader.less
@@ -0,0 +1,5 @@
+// Screen Readers
+// -------------------------
+
+.sr-only { .sr-only(); }
+.sr-only-focusable { .sr-only-focusable(); }
diff --git a/_site/site/public/font-awesome-4.7.0/less/stacked.less b/_site/site/public/font-awesome-4.7.0/less/stacked.less
new file mode 100755
index 00000000..fc53fb0e
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/stacked.less
@@ -0,0 +1,20 @@
+// Stacked Icons
+// -------------------------
+
+.@{fa-css-prefix}-stack {
+  position: relative;
+  display: inline-block;
+  width: 2em;
+  height: 2em;
+  line-height: 2em;
+  vertical-align: middle;
+}
+.@{fa-css-prefix}-stack-1x, .@{fa-css-prefix}-stack-2x {
+  position: absolute;
+  left: 0;
+  width: 100%;
+  text-align: center;
+}
+.@{fa-css-prefix}-stack-1x { line-height: inherit; }
+.@{fa-css-prefix}-stack-2x { font-size: 2em; }
+.@{fa-css-prefix}-inverse { color: @fa-inverse; }
diff --git a/_site/site/public/font-awesome-4.7.0/less/variables.less b/_site/site/public/font-awesome-4.7.0/less/variables.less
new file mode 100755
index 00000000..7ddbbc01
--- /dev/null
+++ b/_site/site/public/font-awesome-4.7.0/less/variables.less
@@ -0,0 +1,800 @@
+// Variables
+// --------------------------
+
+@fa-font-path: "../fonts";
+@fa-font-size-base: 14px;
+@fa-line-height-base: 1;
+//@fa-font-path: "//netdna.bootstrapcdn.com/font-awesome/4.7.0/fonts"; // for referencing Bootstrap CDN font files directly
+@fa-css-prefix: fa;
+@fa-version: "4.7.0";
+@fa-border-color: #eee;
+@fa-inverse: #fff;
+@fa-li-width: (30em / 14);
+
+@fa-var-500px: "\f26e"; +@fa-var-address-book: "\f2b9"; +@fa-var-address-book-o: "\f2ba"; +@fa-var-address-card: "\f2bb"; +@fa-var-address-card-o: "\f2bc"; +@fa-var-adjust: "\f042"; +@fa-var-adn: "\f170"; +@fa-var-align-center: "\f037"; +@fa-var-align-justify: "\f039"; +@fa-var-align-left: "\f036"; +@fa-var-align-right: "\f038"; +@fa-var-amazon: "\f270"; +@fa-var-ambulance: "\f0f9"; +@fa-var-american-sign-language-interpreting: "\f2a3"; +@fa-var-anchor: "\f13d"; +@fa-var-android: "\f17b"; +@fa-var-angellist: "\f209"; +@fa-var-angle-double-down: "\f103"; +@fa-var-angle-double-left: "\f100"; +@fa-var-angle-double-right: "\f101"; +@fa-var-angle-double-up: "\f102"; +@fa-var-angle-down: "\f107"; +@fa-var-angle-left: "\f104"; +@fa-var-angle-right: "\f105"; +@fa-var-angle-up: "\f106"; +@fa-var-apple: "\f179"; +@fa-var-archive: "\f187"; +@fa-var-area-chart: "\f1fe"; +@fa-var-arrow-circle-down: "\f0ab"; +@fa-var-arrow-circle-left: "\f0a8"; +@fa-var-arrow-circle-o-down: "\f01a"; +@fa-var-arrow-circle-o-left: "\f190"; +@fa-var-arrow-circle-o-right: "\f18e"; +@fa-var-arrow-circle-o-up: "\f01b"; +@fa-var-arrow-circle-right: "\f0a9"; +@fa-var-arrow-circle-up: "\f0aa"; +@fa-var-arrow-down: "\f063"; +@fa-var-arrow-left: "\f060"; +@fa-var-arrow-right: "\f061"; +@fa-var-arrow-up: "\f062"; +@fa-var-arrows: "\f047"; +@fa-var-arrows-alt: "\f0b2"; +@fa-var-arrows-h: "\f07e"; +@fa-var-arrows-v: "\f07d"; +@fa-var-asl-interpreting: "\f2a3"; +@fa-var-assistive-listening-systems: "\f2a2"; +@fa-var-asterisk: "\f069";
+@fa-var-at: "\f1fa"; +@fa-var-audio-description: "\f29e"; +@fa-var-automobile: "\f1b9"; +@fa-var-backward: "\f04a"; +@fa-var-balance-scale: "\f24e"; +@fa-var-ban: "\f05e"; +@fa-var-bandcamp: "\f2d5"; +@fa-var-bank: "\f19c"; +@fa-var-bar-chart: "\f080"; +@fa-var-bar-chart-o: "\f080"; +@fa-var-barcode: "\f02a"; +@fa-var-bars: "\f0c9"; +@fa-var-bath: "\f2cd"; +@fa-var-bathtub: "\f2cd"; +@fa-var-battery: "\f240"; +@fa-var-battery-0: "\f244"; +@fa-var-battery-1: "\f243"; +@fa-var-battery-2: "\f242"; +@fa-var-battery-3: "\f241"; +@fa-var-battery-4: "\f240"; +@fa-var-battery-empty: "\f244"; +@fa-var-battery-full: "\f240"; +@fa-var-battery-half: "\f242"; +@fa-var-battery-quarter: "\f243"; +@fa-var-battery-three-quarters: "\f241"; +@fa-var-bed: "\f236"; +@fa-var-beer: "\f0fc"; +@fa-var-behance: "\f1b4"; +@fa-var-behance-square: "\f1b5"; +@fa-var-bell: "\f0f3"; +@fa-var-bell-o: "\f0a2"; +@fa-var-bell-slash: "\f1f6"; +@fa-var-bell-slash-o: "\f1f7"; +@fa-var-bicycle: "\f206"; +@fa-var-binoculars: "\f1e5"; +@fa-var-birthday-cake: "\f1fd"; +@fa-var-bitbucket: "\f171"; +@fa-var-bitbucket-square: "\f172"; +@fa-var-bitcoin: "\f15a"; +@fa-var-black-tie: "\f27e"; +@fa-var-blind: "\f29d"; +@fa-var-bluetooth: "\f293"; +@fa-var-bluetooth-b: "\f294"; +@fa-var-bold: "\f032"; +@fa-var-bolt: "\f0e7"; +@fa-var-bomb: "\f1e2"; +@fa-var-book: "\f02d"; +@fa-var-bookmark: "\f02e"; +@fa-var-bookmark-o: "\f097"; +@fa-var-braille: "\f2a1"; +@fa-var-briefcase: "\f0b1"; +@fa-var-btc: "\f15a"; +@fa-var-bug: "\f188"; +@fa-var-building: "\f1ad"; +@fa-var-building-o: "\f0f7"; +@fa-var-bullhorn: "\f0a1"; +@fa-var-bullseye: "\f140"; +@fa-var-bus: "\f207"; +@fa-var-buysellads: "\f20d"; +@fa-var-cab: "\f1ba"; +@fa-var-calculator: "\f1ec"; +@fa-var-calendar: "\f073"; +@fa-var-calendar-check-o: "\f274"; +@fa-var-calendar-minus-o: "\f272"; +@fa-var-calendar-o: "\f133"; +@fa-var-calendar-plus-o: "\f271"; +@fa-var-calendar-times-o: "\f273"; +@fa-var-camera: "\f030"; +@fa-var-camera-retro: "\f083"; +@fa-var-car: "\f1b9"; +@fa-var-caret-down: "\f0d7"; +@fa-var-caret-left: "\f0d9"; +@fa-var-caret-right: "\f0da"; +@fa-var-caret-square-o-down: "\f150"; +@fa-var-caret-square-o-left: "\f191"; +@fa-var-caret-square-o-right: "\f152"; +@fa-var-caret-square-o-up: "\f151"; +@fa-var-caret-up: "\f0d8"; +@fa-var-cart-arrow-down: "\f218"; +@fa-var-cart-plus: "\f217"; +@fa-var-cc: "\f20a"; +@fa-var-cc-amex: "\f1f3"; +@fa-var-cc-diners-club: "\f24c"; +@fa-var-cc-discover: "\f1f2"; +@fa-var-cc-jcb: "\f24b"; +@fa-var-cc-mastercard: "\f1f1"; +@fa-var-cc-paypal: "\f1f4"; +@fa-var-cc-stripe: "\f1f5"; +@fa-var-cc-visa: "\f1f0"; +@fa-var-certificate: "\f0a3"; +@fa-var-chain: "\f0c1"; +@fa-var-chain-broken: "\f127"; +@fa-var-check: "\f00c"; +@fa-var-check-circle: "\f058"; +@fa-var-check-circle-o: "\f05d"; +@fa-var-check-square: "\f14a"; +@fa-var-check-square-o: "\f046"; +@fa-var-chevron-circle-down: "\f13a"; +@fa-var-chevron-circle-left: "\f137"; +@fa-var-chevron-circle-right: "\f138"; +@fa-var-chevron-circle-up: "\f139"; +@fa-var-chevron-down: "\f078"; +@fa-var-chevron-left: "\f053"; +@fa-var-chevron-right: "\f054"; +@fa-var-chevron-up: "\f077"; +@fa-var-child: "\f1ae"; +@fa-var-chrome: "\f268"; +@fa-var-circle: "\f111"; +@fa-var-circle-o: "\f10c"; +@fa-var-circle-o-notch: "\f1ce"; +@fa-var-circle-thin: "\f1db"; +@fa-var-clipboard: "\f0ea"; +@fa-var-clock-o: "\f017"; +@fa-var-clone: "\f24d"; +@fa-var-close: "\f00d"; +@fa-var-cloud: "\f0c2"; +@fa-var-cloud-download: "\f0ed"; +@fa-var-cloud-upload: "\f0ee"; +@fa-var-cny: "\f157"; +@fa-var-code: "\f121"; 
+@fa-var-code-fork: "\f126"; +@fa-var-codepen: "\f1cb"; +@fa-var-codiepie: "\f284"; +@fa-var-coffee: "\f0f4"; +@fa-var-cog: "\f013"; +@fa-var-cogs: "\f085"; +@fa-var-columns: "\f0db"; +@fa-var-comment: "\f075"; +@fa-var-comment-o: "\f0e5"; +@fa-var-commenting: "\f27a"; +@fa-var-commenting-o: "\f27b"; +@fa-var-comments: "\f086"; +@fa-var-comments-o: "\f0e6"; +@fa-var-compass: "\f14e"; +@fa-var-compress: "\f066"; +@fa-var-connectdevelop: "\f20e"; +@fa-var-contao: "\f26d"; +@fa-var-copy: "\f0c5"; +@fa-var-copyright: "\f1f9"; +@fa-var-creative-commons: "\f25e"; +@fa-var-credit-card: "\f09d"; +@fa-var-credit-card-alt: "\f283"; +@fa-var-crop: "\f125"; +@fa-var-crosshairs: "\f05b"; +@fa-var-css3: "\f13c"; +@fa-var-cube: "\f1b2"; +@fa-var-cubes: "\f1b3"; +@fa-var-cut: "\f0c4"; +@fa-var-cutlery: "\f0f5"; +@fa-var-dashboard: "\f0e4"; +@fa-var-dashcube: "\f210"; +@fa-var-database: "\f1c0"; +@fa-var-deaf: "\f2a4"; +@fa-var-deafness: "\f2a4"; +@fa-var-dedent: "\f03b"; +@fa-var-delicious: "\f1a5"; +@fa-var-desktop: "\f108"; +@fa-var-deviantart: "\f1bd"; +@fa-var-diamond: "\f219"; +@fa-var-digg: "\f1a6"; +@fa-var-dollar: "\f155"; +@fa-var-dot-circle-o: "\f192"; +@fa-var-download: "\f019"; +@fa-var-dribbble: "\f17d"; +@fa-var-drivers-license: "\f2c2"; +@fa-var-drivers-license-o: "\f2c3"; +@fa-var-dropbox: "\f16b"; +@fa-var-drupal: "\f1a9"; +@fa-var-edge: "\f282"; +@fa-var-edit: "\f044"; +@fa-var-eercast: "\f2da"; +@fa-var-eject: "\f052"; +@fa-var-ellipsis-h: "\f141"; +@fa-var-ellipsis-v: "\f142"; +@fa-var-empire: "\f1d1"; +@fa-var-envelope: "\f0e0"; +@fa-var-envelope-o: "\f003"; +@fa-var-envelope-open: "\f2b6"; +@fa-var-envelope-open-o: "\f2b7"; +@fa-var-envelope-square: "\f199"; +@fa-var-envira: "\f299"; +@fa-var-eraser: "\f12d"; +@fa-var-etsy: "\f2d7"; +@fa-var-eur: "\f153"; +@fa-var-euro: "\f153"; +@fa-var-exchange: "\f0ec"; +@fa-var-exclamation: "\f12a"; +@fa-var-exclamation-circle: "\f06a"; +@fa-var-exclamation-triangle: "\f071"; +@fa-var-expand: "\f065"; +@fa-var-expeditedssl: "\f23e"; +@fa-var-external-link: "\f08e"; +@fa-var-external-link-square: "\f14c"; +@fa-var-eye: "\f06e"; +@fa-var-eye-slash: "\f070"; +@fa-var-eyedropper: "\f1fb"; +@fa-var-fa: "\f2b4"; +@fa-var-facebook: "\f09a"; +@fa-var-facebook-f: "\f09a"; +@fa-var-facebook-official: "\f230"; +@fa-var-facebook-square: "\f082"; +@fa-var-fast-backward: "\f049"; +@fa-var-fast-forward: "\f050"; +@fa-var-fax: "\f1ac"; +@fa-var-feed: "\f09e"; +@fa-var-female: "\f182"; +@fa-var-fighter-jet: "\f0fb"; +@fa-var-file: "\f15b"; +@fa-var-file-archive-o: "\f1c6"; +@fa-var-file-audio-o: "\f1c7"; +@fa-var-file-code-o: "\f1c9"; +@fa-var-file-excel-o: "\f1c3"; +@fa-var-file-image-o: "\f1c5"; +@fa-var-file-movie-o: "\f1c8"; +@fa-var-file-o: "\f016"; +@fa-var-file-pdf-o: "\f1c1"; +@fa-var-file-photo-o: "\f1c5"; +@fa-var-file-picture-o: "\f1c5"; +@fa-var-file-powerpoint-o: "\f1c4"; +@fa-var-file-sound-o: "\f1c7"; +@fa-var-file-text: "\f15c"; +@fa-var-file-text-o: "\f0f6"; +@fa-var-file-video-o: "\f1c8"; +@fa-var-file-word-o: "\f1c2"; +@fa-var-file-zip-o: "\f1c6"; +@fa-var-files-o: "\f0c5"; +@fa-var-film: "\f008"; +@fa-var-filter: "\f0b0"; +@fa-var-fire: "\f06d"; +@fa-var-fire-extinguisher: "\f134"; +@fa-var-firefox: "\f269"; +@fa-var-first-order: "\f2b0"; +@fa-var-flag: "\f024"; +@fa-var-flag-checkered: "\f11e"; +@fa-var-flag-o: "\f11d"; +@fa-var-flash: "\f0e7"; +@fa-var-flask: "\f0c3"; +@fa-var-flickr: "\f16e"; +@fa-var-floppy-o: "\f0c7"; +@fa-var-folder: "\f07b"; +@fa-var-folder-o: "\f114"; +@fa-var-folder-open: "\f07c"; +@fa-var-folder-open-o: "\f115"; 
+@fa-var-font: "\f031"; +@fa-var-font-awesome: "\f2b4"; +@fa-var-fonticons: "\f280"; +@fa-var-fort-awesome: "\f286"; +@fa-var-forumbee: "\f211"; +@fa-var-forward: "\f04e"; +@fa-var-foursquare: "\f180"; +@fa-var-free-code-camp: "\f2c5"; +@fa-var-frown-o: "\f119"; +@fa-var-futbol-o: "\f1e3"; +@fa-var-gamepad: "\f11b"; +@fa-var-gavel: "\f0e3"; +@fa-var-gbp: "\f154"; +@fa-var-ge: "\f1d1"; +@fa-var-gear: "\f013"; +@fa-var-gears: "\f085"; +@fa-var-genderless: "\f22d"; +@fa-var-get-pocket: "\f265"; +@fa-var-gg: "\f260"; +@fa-var-gg-circle: "\f261"; +@fa-var-gift: "\f06b"; +@fa-var-git: "\f1d3"; +@fa-var-git-square: "\f1d2"; +@fa-var-github: "\f09b"; +@fa-var-github-alt: "\f113"; +@fa-var-github-square: "\f092"; +@fa-var-gitlab: "\f296"; +@fa-var-gittip: "\f184"; +@fa-var-glass: "\f000"; +@fa-var-glide: "\f2a5"; +@fa-var-glide-g: "\f2a6"; +@fa-var-globe: "\f0ac"; +@fa-var-google: "\f1a0"; +@fa-var-google-plus: "\f0d5"; +@fa-var-google-plus-circle: "\f2b3"; +@fa-var-google-plus-official: "\f2b3"; +@fa-var-google-plus-square: "\f0d4"; +@fa-var-google-wallet: "\f1ee"; +@fa-var-graduation-cap: "\f19d"; +@fa-var-gratipay: "\f184"; +@fa-var-grav: "\f2d6"; +@fa-var-group: "\f0c0"; +@fa-var-h-square: "\f0fd"; +@fa-var-hacker-news: "\f1d4"; +@fa-var-hand-grab-o: "\f255"; +@fa-var-hand-lizard-o: "\f258"; +@fa-var-hand-o-down: "\f0a7"; +@fa-var-hand-o-left: "\f0a5"; +@fa-var-hand-o-right: "\f0a4"; +@fa-var-hand-o-up: "\f0a6"; +@fa-var-hand-paper-o: "\f256"; +@fa-var-hand-peace-o: "\f25b"; +@fa-var-hand-pointer-o: "\f25a"; +@fa-var-hand-rock-o: "\f255"; +@fa-var-hand-scissors-o: "\f257"; +@fa-var-hand-spock-o: "\f259"; +@fa-var-hand-stop-o: "\f256"; +@fa-var-handshake-o: "\f2b5"; +@fa-var-hard-of-hearing: "\f2a4"; +@fa-var-hashtag: "\f292"; +@fa-var-hdd-o: "\f0a0"; +@fa-var-header: "\f1dc"; +@fa-var-headphones: "\f025"; +@fa-var-heart: "\f004"; +@fa-var-heart-o: "\f08a"; +@fa-var-heartbeat: "\f21e"; +@fa-var-history: "\f1da"; +@fa-var-home: "\f015"; +@fa-var-hospital-o: "\f0f8"; +@fa-var-hotel: "\f236"; +@fa-var-hourglass: "\f254"; +@fa-var-hourglass-1: "\f251"; +@fa-var-hourglass-2: "\f252"; +@fa-var-hourglass-3: "\f253"; +@fa-var-hourglass-end: "\f253"; +@fa-var-hourglass-half: "\f252"; +@fa-var-hourglass-o: "\f250"; +@fa-var-hourglass-start: "\f251"; +@fa-var-houzz: "\f27c"; +@fa-var-html5: "\f13b"; +@fa-var-i-cursor: "\f246"; +@fa-var-id-badge: "\f2c1"; +@fa-var-id-card: "\f2c2"; +@fa-var-id-card-o: "\f2c3"; +@fa-var-ils: "\f20b"; +@fa-var-image: "\f03e"; +@fa-var-imdb: "\f2d8"; +@fa-var-inbox: "\f01c"; +@fa-var-indent: "\f03c"; +@fa-var-industry: "\f275"; +@fa-var-info: "\f129"; +@fa-var-info-circle: "\f05a"; +@fa-var-inr: "\f156"; +@fa-var-instagram: "\f16d"; +@fa-var-institution: "\f19c"; +@fa-var-internet-explorer: "\f26b"; +@fa-var-intersex: "\f224"; +@fa-var-ioxhost: "\f208"; +@fa-var-italic: "\f033"; +@fa-var-joomla: "\f1aa"; +@fa-var-jpy: "\f157"; +@fa-var-jsfiddle: "\f1cc"; +@fa-var-key: "\f084"; +@fa-var-keyboard-o: "\f11c"; +@fa-var-krw: "\f159"; +@fa-var-language: "\f1ab"; +@fa-var-laptop: "\f109"; +@fa-var-lastfm: "\f202"; +@fa-var-lastfm-square: "\f203"; +@fa-var-leaf: "\f06c"; +@fa-var-leanpub: "\f212"; +@fa-var-legal: "\f0e3"; +@fa-var-lemon-o: "\f094"; +@fa-var-level-down: "\f149"; +@fa-var-level-up: "\f148"; +@fa-var-life-bouy: "\f1cd"; +@fa-var-life-buoy: "\f1cd"; +@fa-var-life-ring: "\f1cd"; +@fa-var-life-saver: "\f1cd"; +@fa-var-lightbulb-o: "\f0eb"; +@fa-var-line-chart: "\f201"; +@fa-var-link: "\f0c1"; +@fa-var-linkedin: "\f0e1"; +@fa-var-linkedin-square: "\f08c"; +@fa-var-linode: 
"\f2b8"; +@fa-var-linux: "\f17c"; +@fa-var-list: "\f03a"; +@fa-var-list-alt: "\f022"; +@fa-var-list-ol: "\f0cb"; +@fa-var-list-ul: "\f0ca"; +@fa-var-location-arrow: "\f124"; +@fa-var-lock: "\f023"; +@fa-var-long-arrow-down: "\f175"; +@fa-var-long-arrow-left: "\f177"; +@fa-var-long-arrow-right: "\f178"; +@fa-var-long-arrow-up: "\f176"; +@fa-var-low-vision: "\f2a8"; +@fa-var-magic: "\f0d0"; +@fa-var-magnet: "\f076"; +@fa-var-mail-forward: "\f064"; +@fa-var-mail-reply: "\f112"; +@fa-var-mail-reply-all: "\f122"; +@fa-var-male: "\f183"; +@fa-var-map: "\f279"; +@fa-var-map-marker: "\f041"; +@fa-var-map-o: "\f278"; +@fa-var-map-pin: "\f276"; +@fa-var-map-signs: "\f277"; +@fa-var-mars: "\f222"; +@fa-var-mars-double: "\f227"; +@fa-var-mars-stroke: "\f229"; +@fa-var-mars-stroke-h: "\f22b"; +@fa-var-mars-stroke-v: "\f22a"; +@fa-var-maxcdn: "\f136"; +@fa-var-meanpath: "\f20c"; +@fa-var-medium: "\f23a"; +@fa-var-medkit: "\f0fa"; +@fa-var-meetup: "\f2e0"; +@fa-var-meh-o: "\f11a"; +@fa-var-mercury: "\f223"; +@fa-var-microchip: "\f2db"; +@fa-var-microphone: "\f130"; +@fa-var-microphone-slash: "\f131"; +@fa-var-minus: "\f068"; +@fa-var-minus-circle: "\f056"; +@fa-var-minus-square: "\f146"; +@fa-var-minus-square-o: "\f147"; +@fa-var-mixcloud: "\f289"; +@fa-var-mobile: "\f10b"; +@fa-var-mobile-phone: "\f10b"; +@fa-var-modx: "\f285"; +@fa-var-money: "\f0d6"; +@fa-var-moon-o: "\f186"; +@fa-var-mortar-board: "\f19d"; +@fa-var-motorcycle: "\f21c"; +@fa-var-mouse-pointer: "\f245"; +@fa-var-music: "\f001"; +@fa-var-navicon: "\f0c9"; +@fa-var-neuter: "\f22c"; +@fa-var-newspaper-o: "\f1ea"; +@fa-var-object-group: "\f247"; +@fa-var-object-ungroup: "\f248"; +@fa-var-odnoklassniki: "\f263"; +@fa-var-odnoklassniki-square: "\f264"; +@fa-var-opencart: "\f23d"; +@fa-var-openid: "\f19b"; +@fa-var-opera: "\f26a"; +@fa-var-optin-monster: "\f23c"; +@fa-var-outdent: "\f03b"; +@fa-var-pagelines: "\f18c"; +@fa-var-paint-brush: "\f1fc"; +@fa-var-paper-plane: "\f1d8"; +@fa-var-paper-plane-o: "\f1d9"; +@fa-var-paperclip: "\f0c6"; +@fa-var-paragraph: "\f1dd"; +@fa-var-paste: "\f0ea"; +@fa-var-pause: "\f04c"; +@fa-var-pause-circle: "\f28b"; +@fa-var-pause-circle-o: "\f28c"; +@fa-var-paw: "\f1b0"; +@fa-var-paypal: "\f1ed"; +@fa-var-pencil: "\f040"; +@fa-var-pencil-square: "\f14b"; +@fa-var-pencil-square-o: "\f044"; +@fa-var-percent: "\f295"; +@fa-var-phone: "\f095"; +@fa-var-phone-square: "\f098"; +@fa-var-photo: "\f03e"; +@fa-var-picture-o: "\f03e"; +@fa-var-pie-chart: "\f200"; +@fa-var-pied-piper: "\f2ae"; +@fa-var-pied-piper-alt: "\f1a8"; +@fa-var-pied-piper-pp: "\f1a7"; +@fa-var-pinterest: "\f0d2"; +@fa-var-pinterest-p: "\f231"; +@fa-var-pinterest-square: "\f0d3"; +@fa-var-plane: "\f072"; +@fa-var-play: "\f04b"; +@fa-var-play-circle: "\f144"; +@fa-var-play-circle-o: "\f01d"; +@fa-var-plug: "\f1e6"; +@fa-var-plus: "\f067"; +@fa-var-plus-circle: "\f055"; +@fa-var-plus-square: "\f0fe"; +@fa-var-plus-square-o: "\f196"; +@fa-var-podcast: "\f2ce"; +@fa-var-power-off: "\f011"; +@fa-var-print: "\f02f"; +@fa-var-product-hunt: "\f288"; +@fa-var-puzzle-piece: "\f12e"; +@fa-var-qq: "\f1d6"; +@fa-var-qrcode: "\f029"; +@fa-var-question: "\f128"; +@fa-var-question-circle: "\f059"; +@fa-var-question-circle-o: "\f29c"; +@fa-var-quora: "\f2c4"; +@fa-var-quote-left: "\f10d"; +@fa-var-quote-right: "\f10e"; +@fa-var-ra: "\f1d0"; +@fa-var-random: "\f074"; +@fa-var-ravelry: "\f2d9"; +@fa-var-rebel: "\f1d0"; +@fa-var-recycle: "\f1b8"; +@fa-var-reddit: "\f1a1"; +@fa-var-reddit-alien: "\f281"; +@fa-var-reddit-square: "\f1a2"; +@fa-var-refresh: "\f021"; 
+@fa-var-registered: "\f25d"; +@fa-var-remove: "\f00d"; +@fa-var-renren: "\f18b"; +@fa-var-reorder: "\f0c9"; +@fa-var-repeat: "\f01e"; +@fa-var-reply: "\f112"; +@fa-var-reply-all: "\f122"; +@fa-var-resistance: "\f1d0"; +@fa-var-retweet: "\f079"; +@fa-var-rmb: "\f157"; +@fa-var-road: "\f018"; +@fa-var-rocket: "\f135"; +@fa-var-rotate-left: "\f0e2"; +@fa-var-rotate-right: "\f01e"; +@fa-var-rouble: "\f158"; +@fa-var-rss: "\f09e"; +@fa-var-rss-square: "\f143"; +@fa-var-rub: "\f158"; +@fa-var-ruble: "\f158"; +@fa-var-rupee: "\f156"; +@fa-var-s15: "\f2cd"; +@fa-var-safari: "\f267"; +@fa-var-save: "\f0c7"; +@fa-var-scissors: "\f0c4"; +@fa-var-scribd: "\f28a"; +@fa-var-search: "\f002"; +@fa-var-search-minus: "\f010"; +@fa-var-search-plus: "\f00e"; +@fa-var-sellsy: "\f213"; +@fa-var-send: "\f1d8"; +@fa-var-send-o: "\f1d9"; +@fa-var-server: "\f233"; +@fa-var-share: "\f064"; +@fa-var-share-alt: "\f1e0"; +@fa-var-share-alt-square: "\f1e1"; +@fa-var-share-square: "\f14d"; +@fa-var-share-square-o: "\f045"; +@fa-var-shekel: "\f20b"; +@fa-var-sheqel: "\f20b"; +@fa-var-shield: "\f132"; +@fa-var-ship: "\f21a"; +@fa-var-shirtsinbulk: "\f214"; +@fa-var-shopping-bag: "\f290"; +@fa-var-shopping-basket: "\f291"; +@fa-var-shopping-cart: "\f07a"; +@fa-var-shower: "\f2cc"; +@fa-var-sign-in: "\f090"; +@fa-var-sign-language: "\f2a7"; +@fa-var-sign-out: "\f08b"; +@fa-var-signal: "\f012"; +@fa-var-signing: "\f2a7"; +@fa-var-simplybuilt: "\f215"; +@fa-var-sitemap: "\f0e8"; +@fa-var-skyatlas: "\f216"; +@fa-var-skype: "\f17e"; +@fa-var-slack: "\f198"; +@fa-var-sliders: "\f1de"; +@fa-var-slideshare: "\f1e7"; +@fa-var-smile-o: "\f118"; +@fa-var-snapchat: "\f2ab"; +@fa-var-snapchat-ghost: "\f2ac"; +@fa-var-snapchat-square: "\f2ad"; +@fa-var-snowflake-o: "\f2dc"; +@fa-var-soccer-ball-o: "\f1e3"; +@fa-var-sort: "\f0dc"; +@fa-var-sort-alpha-asc: "\f15d"; +@fa-var-sort-alpha-desc: "\f15e"; +@fa-var-sort-amount-asc: "\f160"; +@fa-var-sort-amount-desc: "\f161"; +@fa-var-sort-asc: "\f0de"; +@fa-var-sort-desc: "\f0dd"; +@fa-var-sort-down: "\f0dd"; +@fa-var-sort-numeric-asc: "\f162"; +@fa-var-sort-numeric-desc: "\f163"; +@fa-var-sort-up: "\f0de"; +@fa-var-soundcloud: "\f1be"; +@fa-var-space-shuttle: "\f197"; +@fa-var-spinner: "\f110"; +@fa-var-spoon: "\f1b1"; +@fa-var-spotify: "\f1bc"; +@fa-var-square: "\f0c8"; +@fa-var-square-o: "\f096"; +@fa-var-stack-exchange: "\f18d"; +@fa-var-stack-overflow: "\f16c"; +@fa-var-star: "\f005"; +@fa-var-star-half: "\f089"; +@fa-var-star-half-empty: "\f123"; +@fa-var-star-half-full: "\f123"; +@fa-var-star-half-o: "\f123"; +@fa-var-star-o: "\f006"; +@fa-var-steam: "\f1b6"; +@fa-var-steam-square: "\f1b7"; +@fa-var-step-backward: "\f048"; +@fa-var-step-forward: "\f051"; +@fa-var-stethoscope: "\f0f1"; +@fa-var-sticky-note: "\f249"; +@fa-var-sticky-note-o: "\f24a"; +@fa-var-stop: "\f04d"; +@fa-var-stop-circle: "\f28d"; +@fa-var-stop-circle-o: "\f28e"; +@fa-var-street-view: "\f21d"; +@fa-var-strikethrough: "\f0cc"; +@fa-var-stumbleupon: "\f1a4"; +@fa-var-stumbleupon-circle: "\f1a3"; +@fa-var-subscript: "\f12c"; +@fa-var-subway: "\f239"; +@fa-var-suitcase: "\f0f2"; +@fa-var-sun-o: "\f185"; +@fa-var-superpowers: "\f2dd"; +@fa-var-superscript: "\f12b"; +@fa-var-support: "\f1cd"; +@fa-var-table: "\f0ce"; +@fa-var-tablet: "\f10a"; +@fa-var-tachometer: "\f0e4"; +@fa-var-tag: "\f02b"; +@fa-var-tags: "\f02c"; +@fa-var-tasks: "\f0ae"; +@fa-var-taxi: "\f1ba"; +@fa-var-telegram: "\f2c6"; +@fa-var-television: "\f26c"; +@fa-var-tencent-weibo: "\f1d5"; +@fa-var-terminal: "\f120"; +@fa-var-text-height: "\f034"; 
+@fa-var-text-width: "\f035"; +@fa-var-th: "\f00a"; +@fa-var-th-large: "\f009"; +@fa-var-th-list: "\f00b"; +@fa-var-themeisle: "\f2b2"; +@fa-var-thermometer: "\f2c7"; +@fa-var-thermometer-0: "\f2cb"; +@fa-var-thermometer-1: "\f2ca"; +@fa-var-thermometer-2: "\f2c9"; +@fa-var-thermometer-3: "\f2c8"; +@fa-var-thermometer-4: "\f2c7"; +@fa-var-thermometer-empty: "\f2cb"; +@fa-var-thermometer-full: "\f2c7"; +@fa-var-thermometer-half: "\f2c9"; +@fa-var-thermometer-quarter: "\f2ca"; +@fa-var-thermometer-three-quarters: "\f2c8"; +@fa-var-thumb-tack: "\f08d"; +@fa-var-thumbs-down: "\f165"; +@fa-var-thumbs-o-down: "\f088"; +@fa-var-thumbs-o-up: "\f087"; +@fa-var-thumbs-up: "\f164"; +@fa-var-ticket: "\f145"; +@fa-var-times: "\f00d"; +@fa-var-times-circle: "\f057"; +@fa-var-times-circle-o: "\f05c"; +@fa-var-times-rectangle: "\f2d3"; +@fa-var-times-rectangle-o: "\f2d4"; +@fa-var-tint: "\f043"; +@fa-var-toggle-down: "\f150"; +@fa-var-toggle-left: "\f191"; +@fa-var-toggle-off: "\f204"; +@fa-var-toggle-on: "\f205"; +@fa-var-toggle-right: "\f152"; +@fa-var-toggle-up: "\f151"; +@fa-var-trademark: "\f25c"; +@fa-var-train: "\f238"; +@fa-var-transgender: "\f224"; +@fa-var-transgender-alt: "\f225"; +@fa-var-trash: "\f1f8"; +@fa-var-trash-o: "\f014"; +@fa-var-tree: "\f1bb"; +@fa-var-trello: "\f181"; +@fa-var-tripadvisor: "\f262"; +@fa-var-trophy: "\f091"; +@fa-var-truck: "\f0d1"; +@fa-var-try: "\f195"; +@fa-var-tty: "\f1e4"; +@fa-var-tumblr: "\f173"; +@fa-var-tumblr-square: "\f174"; +@fa-var-turkish-lira: "\f195"; +@fa-var-tv: "\f26c"; +@fa-var-twitch: "\f1e8"; +@fa-var-twitter: "\f099"; +@fa-var-twitter-square: "\f081"; +@fa-var-umbrella: "\f0e9"; +@fa-var-underline: "\f0cd"; +@fa-var-undo: "\f0e2"; +@fa-var-universal-access: "\f29a"; +@fa-var-university: "\f19c"; +@fa-var-unlink: "\f127"; +@fa-var-unlock: "\f09c"; +@fa-var-unlock-alt: "\f13e"; +@fa-var-unsorted: "\f0dc"; +@fa-var-upload: "\f093"; +@fa-var-usb: "\f287"; +@fa-var-usd: "\f155"; +@fa-var-user: "\f007"; +@fa-var-user-circle: "\f2bd"; +@fa-var-user-circle-o: "\f2be"; +@fa-var-user-md: "\f0f0"; +@fa-var-user-o: "\f2c0"; +@fa-var-user-plus: "\f234"; +@fa-var-user-secret: "\f21b"; +@fa-var-user-times: "\f235"; +@fa-var-users: "\f0c0"; +@fa-var-vcard: "\f2bb"; +@fa-var-vcard-o: "\f2bc"; +@fa-var-venus: "\f221"; +@fa-var-venus-double: "\f226"; +@fa-var-venus-mars: "\f228"; +@fa-var-viacoin: "\f237"; +@fa-var-viadeo: "\f2a9"; +@fa-var-viadeo-square: "\f2aa"; +@fa-var-video-camera: "\f03d"; +@fa-var-vimeo: "\f27d"; +@fa-var-vimeo-square: "\f194"; +@fa-var-vine: "\f1ca"; +@fa-var-vk: "\f189"; +@fa-var-volume-control-phone: "\f2a0"; +@fa-var-volume-down: "\f027"; +@fa-var-volume-off: "\f026"; +@fa-var-volume-up: "\f028"; +@fa-var-warning: "\f071"; +@fa-var-wechat: "\f1d7"; +@fa-var-weibo: "\f18a"; +@fa-var-weixin: "\f1d7"; +@fa-var-whatsapp: "\f232"; +@fa-var-wheelchair: "\f193"; +@fa-var-wheelchair-alt: "\f29b"; +@fa-var-wifi: "\f1eb"; +@fa-var-wikipedia-w: "\f266"; +@fa-var-window-close: "\f2d3"; +@fa-var-window-close-o: "\f2d4"; +@fa-var-window-maximize: "\f2d0"; +@fa-var-window-minimize: "\f2d1"; +@fa-var-window-restore: "\f2d2"; +@fa-var-windows: "\f17a"; +@fa-var-won: "\f159"; +@fa-var-wordpress: "\f19a"; +@fa-var-wpbeginner: "\f297"; +@fa-var-wpexplorer: "\f2de"; +@fa-var-wpforms: "\f298"; +@fa-var-wrench: "\f0ad"; +@fa-var-xing: "\f168"; +@fa-var-xing-square: "\f169"; +@fa-var-y-combinator: "\f23b"; +@fa-var-y-combinator-square: "\f1d4"; +@fa-var-yahoo: "\f19e"; +@fa-var-yc: "\f23b"; +@fa-var-yc-square: "\f1d4"; +@fa-var-yelp: "\f1e9"; +@fa-var-yen: 
"\f157"; +@fa-var-yoast: "\f2b1"; +@fa-var-youtube: "\f167"; +@fa-var-youtube-play: "\f16a"; +@fa-var-youtube-square: "\f166"; + diff --git a/_site/site/public/font-awesome-4.7.0/scss/font-awesome.scss b/_site/site/public/font-awesome-4.7.0/scss/font-awesome.scss new file mode 100755 index 00000000..f1c83aaa --- /dev/null +++ b/_site/site/public/font-awesome-4.7.0/scss/font-awesome.scss @@ -0,0 +1,18 @@ +/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */ + +@import "variables"; +@import "mixins"; +@import "path"; +@import "core"; +@import "larger"; +@import "fixed-width"; +@import "list"; +@import "bordered-pulled"; +@import "animated"; +@import "rotated-flipped"; +@import "stacked"; +@import "icons"; +@import "screen-reader"; diff --git a/_site/site/tags.html b/_site/site/tags.html new file mode 100644 index 00000000..6442c184 --- /dev/null +++ b/_site/site/tags.html @@ -0,0 +1,5006 @@ + + + + + +

[tags.html continues for roughly 5,000 generated lines: the site's tag-index page, listing every tag used across posts, from year tags (1999 through 2023) and venue tags (AAAI, AAMAS, ACL, CVPR, ECCV, EMNLP, ICCV, ICLR, ICML, KDD, NeurIPS, UAI, WACV) to topic tags running alphabetically from "Abductive Reasoning" to "Zero-Shot". Only the tag names survived extraction; the surrounding HTML markup is omitted.]
diff --git a/site/_posts/2023-02-10-Toolformer - Language Models Can Teach Themselves to Use Tools.md b/site/_posts/2023-02-10-Toolformer - Language Models Can Teach Themselves to Use Tools.md
index bcd511dd..570a84e9 100755
--- a/site/_posts/2023-02-10-Toolformer - Language Models Can Teach Themselves to Use Tools.md
+++ b/site/_posts/2023-02-10-Toolformer - Language Models Can Teach Themselves to Use Tools.md
@@ -25,7 +25,7 @@ tags:
 
 - Starting with a language model, M, the goal is to enable the language model to use tools by invoking API calls.
 
-- An API call is denoted by the tuple $c = (api-name, api-input)$. It can be linearized as $e(c) = [api-name(api-input)]$ or as $e(c, r) = [api-name(api-input) -> r]$ where $r$ denotes the result of the API.
+- An API call is denoted by the tuple $c = (\text{api\_name}, \text{api\_input})$. It can be linearized as $e(c) = [\text{api\_name}(\text{api\_input})]$ or as $e(c, r) = [\text{api\_name}(\text{api\_input}) \rightarrow r]$, where $r$ denotes the result of the API call.
 
 - The given dataset of plain text, $C$, is converted into a dataset $C*$ augmented with the API calls using a three-step process.
 
diff --git a/site/_site b/site/_site
index 595a66b8..f852505c 160000
--- a/site/_site
+++ b/site/_site
@@ -1 +1 @@
-Subproject commit 595a66b8361d6a240aafa6bb4450f0133b6a7a96
+Subproject commit f852505cc19589ddf6596d882825bea7293282e9
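For readers skimming the patch, the notation fixed in the Toolformer hunk above is easy to mis-parse, so here is a minimal Python sketch of the linearization $e(\cdot)$. It is illustrative only: the function name and example values are made up for this note, and the bracket format follows the summary's notation rather than the paper's actual implementation.

```python
from typing import Optional

def linearize(api_name: str, api_input: str, result: Optional[str] = None) -> str:
    """Render an API call c = (api_name, api_input) as a text span.

    Without a result this mirrors e(c) = [api_name(api_input)];
    with a result r it mirrors e(c, r) = [api_name(api_input) -> r].
    """
    call = f"{api_name}({api_input})"
    if result is None:
        return f"[{call}]"
    return f"[{call} -> {result}]"

# Hypothetical example: a calculator call as it would be spliced into text.
print(linearize("Calculator", "1400 / 400"))         # [Calculator(1400 / 400)]
print(linearize("Calculator", "1400 / 400", "3.5"))  # [Calculator(1400 / 400) -> 3.5]
```

Spans like these are what get inserted into the plain-text corpus $C$ to build the augmented dataset $C*$, so the finetuned model learns to emit the call, read back $r$, and keep generating.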