Abstract: Textual adversarial attack methods aim to make a victim model misbehave by replacing some words in the input text. This article proposes an effective word-level ...
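To make the idea concrete, here is a minimal sketch of a greedy word-level substitution attack against a black-box classifier. The `predict` and `get_synonyms` callables are hypothetical placeholders standing in for a victim model and a synonym source; this is not the article's actual method.

```python
from typing import Callable, List

def greedy_word_attack(
    text: str,
    predict: Callable[[str], int],            # black-box victim: text -> label
    get_synonyms: Callable[[str], List[str]], # candidate substitutes per word
) -> str:
    """Try one-word synonym swaps, left to right, until the label flips."""
    original_label = predict(text)
    words = text.split()
    for i, word in enumerate(words):
        for candidate in get_synonyms(word):
            trial = words[:i] + [candidate] + words[i + 1:]
            perturbed = " ".join(trial)
            if predict(perturbed) != original_label:
                return perturbed  # adversarial example found
    return text  # no single-word swap flipped the label


# Toy usage with stub components (purely illustrative):
if __name__ == "__main__":
    synonyms = {"good": ["fine", "great"], "movie": ["film"]}
    victim = lambda s: 1 if "good" in s else 0  # trivial "sentiment" model
    print(greedy_word_attack("a good movie", victim,
                             lambda w: synonyms.get(w, [])))
```

Real word-level attacks refine this basic loop, e.g. by ranking words by importance and constraining substitutions to preserve semantics, but the greedy search above captures the core mechanic the abstract describes.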