A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
[ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
Collection of awesome parameter-efficient fine-tuning resources.
This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
My lab work for the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.
[SIGIR'24] The official implementation code of MOELoRA.
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models
A paper list covering large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
This repository contains a project whose goal is to find a new parameter-efficient fine-tuning framework to improve the performance of deep artificial neural networks on out-of-distribution (OOD) data. In this specific case, it addresses a multi-task learning problem.
Fine-Tuned LLM-Based FAQ Generation for University Admissions: a project fine-tuning state-of-the-art language models, including LLaMA-3 8B, LLaMA-2 7B, Mistral 7B, T5, and BART, using QLoRA PEFT.
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"
[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis
[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App