Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts
Updated Dec 28, 2024 - Python
Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, and detection in specific scenarios
ImageNet pre-trained models with batch normalization for the Caffe framework
Fine-tuning code for CLIP models
[SOTA] [92% acc] 786M-8k-44L-32H multi-instrumental music transformer with the full MIDI instrument range, efficient encoding, octo-velocity, and outro tokens
Use FastSpeech2 and HiFi-GAN to easily perform end-to-end Korean speech synthesis.
TensorFlow Implementation of Manifold Regularized Convolutional Neural Networks.
🚂 Fine-tune OpenAI models for text classification, question answering, and more
Experiments comparing sparse autoencoder (SAE) and CLIP fine-tuning.
[Bachelor Graduation Project] Use Xception model for face anti-spoofing
This repository contains the source code for the first and second tasks of the DeftEval 2020 competition, used by the University Politehnica of Bucharest (UPB) team to train and evaluate its models.
🌹[ICML 2024] Selecting Large Language Model to Fine-tune via Rectified Scaling Law
Geometric Parametrization GmP-Inf-CLIP modification of "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss" — a highly memory-efficient CLIP training scheme.