VT-MCNet: High-Accuracy Automatic Modulation Classification Model based on Vision Transformer
Citation:
Thien-Thanh Dao, Dae-Il Noh, Quoc-Viet Pham, Mikio Hasegawa, Hiroo Sekiya, Won-Joo Hwang, "VT-MCNet: High-Accuracy Automatic Modulation Classification Model Based on Vision Transformer," IEEE Communications Letters, vol. 28, no. 1, 2024, pp. 98-102.

Abstract:
Cognitive radio networks' evolution hinges significantly on automatic modulation classification (AMC). However, existing research reveals limitations in attaining high AMC accuracy due to ineffective feature extraction from signals. To counter this, we propose a vision-centric approach employing diverse kernel sizes to augment signal feature extraction. In addition, we refine the transformer architecture by incorporating a dual-branch multi-layer perceptron network, enabling diverse pattern learning and enhancing the model's running speed. Specifically, our architecture allows the system to focus on relevant portions of the input sequence, thereby improving classification accuracy in both high and low signal-to-noise regimes. Using the widely recognized DeepSig dataset, our deep model, termed VT-MCNet, outshines prior leading-edge deep networks in terms of classification accuracy and computational cost. Notably, VT-MCNet reaches an exceptional cumulative classification rate of up to 99.24%, while the state-of-the-art method, even with higher computational complexity, achieves only 99.06%.
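The multi-scale feature-extraction idea mentioned in the abstract can be sketched roughly as follows. This is an illustrative approximation only, not the authors' implementation: the function name `multi_kernel_features`, the kernel sizes, the fixed averaging kernels, and the random I/Q input are all assumptions made for the example.

```python
import numpy as np

def multi_kernel_features(iq, kernel_sizes=(3, 5, 7)):
    """Extract features from a 2xN I/Q frame with several 1-D kernel widths.

    Convolving the same signal with differently sized kernels captures
    patterns at several temporal scales, loosely mirroring the
    "diverse kernel sizes" idea described in the abstract.
    """
    features = []
    for k in kernel_sizes:
        # Placeholder averaging kernel; in the actual model the kernels
        # would be learned convolutional filters.
        kernel = np.ones(k) / k
        # Convolve the I and Q channels separately; 'valid' mode keeps
        # only positions where the kernel fully overlaps the signal.
        per_channel = [np.convolve(ch, kernel, mode="valid") for ch in iq]
        features.append(np.stack(per_channel))
    return features

# Toy example: one random 2x128 I/Q frame (the shape of a DeepSig sample).
rng = np.random.default_rng(0)
frame = rng.standard_normal((2, 128))
feats = multi_kernel_features(frame)
print([f.shape for f in feats])  # one feature map per kernel size
```

Each kernel width yields a feature map at a different temporal resolution; a real model would concatenate or fuse these maps before feeding them to the transformer encoder.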
Author's Homepage:
http://people.tcd.ie/phamq

Description:
PUBLISHED
Author: Pham, Viet
Type of material:
Journal Article

Collections:
Series/Report no.:
IEEE Communications Letters, vol. 28, no. 1
Availability:
Full text available

Keywords:
Convolutional neural network, Wireless communications, Vision transformers, Modulation classification

Subject (TCD):
Telecommunications

DOI:
https://doi.org/10.1109/LCOMM.2023.3336985

ISSN:
1089-7798