
Update papers.json
jS5t3r committed Dec 13, 2024
1 parent 77606a3 commit 71fffb1
Showing 1 changed file with 7 additions and 7 deletions.
14 changes: 7 additions & 7 deletions assets/json/model_stealing_papers.json
@@ -1,4 +1,11 @@
 [
+  {
+    "date": "2024-12",
+    "title": "A Unified Model For Voice and Accent Conversion In Speech and Singing using Self-Supervised Learning and Feature Extraction",
+    "author": "Sowmya Cheripally",
+    "link": "http://arxiv.org/abs/2412.08312v1",
+    "abstract": "This paper presents a new voice conversion model capable of transforming both\nspeaking and singing voices. It addresses key challenges in current systems,\nsuch as conveying emotions, managing pronunciation and accent changes, and\nreproducing non-verbal sounds. One of the model's standout features is its\nability to perform accent conversion on hybrid voice samples that encompass\nboth speech and singing, allowing it to change the speaker's accent while\npreserving the original content and prosody. The proposed model uses an\nencoder-decoder architecture: the encoder is based on HuBERT to process the\nspeech's acoustic and linguistic content, while the HiFi-GAN decoder audio\nmatches the target speaker's voice. The model incorporates fundamental\nfrequency (f0) features and singer embeddings to enhance performance while\nensuring the pitch & tone accuracy and vocal identity are preserved during\ntransformation. This approach improves how naturally and flexibly voice style\ncan be transformed, showing strong potential for applications in voice dubbing,\ncontent creation, and technologies like Text-to-Speech (TTS) and Interactive\nVoice Response (IVR) systems."
+  },
   {
     "date": "2024-12",
     "title": "Large Language Models Merging for Enhancing the Link Stealing Attack on Graph Neural Networks",
@@ -1391,12 +1398,5 @@
"author": "Sixiao Zhang, Hongzhi Yin, Hongxu Chen, and Cheng Long",
"link": "http://arxiv.org/abs/2310.16335v1",
"abstract": "The robustness of recommender systems has become a prominent topic within the\nresearch community. Numerous adversarial attacks have been proposed, but most\nof them rely on extensive prior knowledge, such as all the white-box attacks or\nmost of the black-box attacks which assume that certain external knowledge is\navailable. Among these attacks, the model extraction attack stands out as a\npromising and practical method, involving training a surrogate model by\nrepeatedly querying the target model. However, there is a significant gap in\nthe existing literature when it comes to defending against model extraction\nattacks on recommender systems. In this paper, we introduce Gradient-based\nRanking Optimization (GRO), which is the first defense strategy designed to\ncounter such attacks. We formalize the defense as an optimization problem,\naiming to minimize the loss of the protected target model while maximizing the\nloss of the attacker's surrogate model. Since top-k ranking lists are\nnon-differentiable, we transform them into swap matrices which are instead\ndifferentiable. These swap matrices serve as input to a student model that\nemulates the surrogate model's behavior. By back-propagating the loss of the\nstudent model, we obtain gradients for the swap matrices. These gradients are\nused to compute a swap loss, which maximizes the loss of the student model. We\nconducted experiments on three benchmark datasets to evaluate the performance\nof GRO, and the results demonstrate its superior effectiveness in defending\nagainst model extraction attacks."
},
{
"date": "2023-10",
"title": "Efficient Data Learning for Open Information Extraction with Pre-trained Language Models",
"author": "Zhiyuan Fan, and Shizhu He",
"link": "http://arxiv.org/abs/2310.15021v2",
"abstract": "Open Information Extraction (OpenIE) is a fundamental yet challenging task in\nNatural Language Processing, which involves extracting all triples (subject,\npredicate, object) from a given sentence. While labeling-based methods have\ntheir merits, generation-based techniques offer unique advantages, such as the\nability to generate tokens not present in the original sentence. However, these\ngeneration-based methods often require a significant amount of training data to\nlearn the task form of OpenIE and substantial training time to overcome slow\nmodel convergence due to the order penalty. In this paper, we introduce a novel\nframework, OK-IE, that ingeniously transforms the task form of OpenIE into the\npre-training task form of the T5 model, thereby reducing the need for extensive\ntraining data. Furthermore, we introduce an innovative concept of Anchor to\ncontrol the sequence of model outputs, effectively eliminating the impact of\norder penalty on model convergence and significantly reducing training time.\nExperimental results indicate that, compared to previous SOTA methods, OK-IE\nrequires only 1/100 of the training data (900 instances) and 1/120 of the\ntraining time (3 minutes) to achieve comparable results."
}
]
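
For downstream consumers of this file, a minimal sketch of loading and sanity-checking the entry schema. The path and the five fields are taken from the diff above; the helper name `load_papers` is illustrative, not part of this repository.

```python
import json

# Fields each entry carries, per the objects shown in this diff.
REQUIRED_FIELDS = {"date", "title", "author", "link", "abstract"}

def load_papers(path="assets/json/model_stealing_papers.json"):
    """Load the paper list and verify every entry has the expected keys."""
    with open(path, encoding="utf-8") as f:
        papers = json.load(f)
    for i, entry in enumerate(papers):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} ({entry.get('title', '?')!r}) is missing: {missing}")
    return papers

if __name__ == "__main__":
    papers = load_papers()
    # Entries are ordered newest-first, e.g. the "2024-12" addition at the top of this diff.
    print(f"{len(papers)} papers; newest dated {papers[0]['date']}")
```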
