CHENG YE
A CFC@LDN Fan

Hi, my name is CHENG YE (程烨, 程ヨウ). I am a first-year master's student at Kyoto University, where I conduct research in the Data Engineering and Platform Research Group, advised by Prof. Kazuyuki Shudo. I received my B.Eng. from Kansai University, where I was advised by Assoc. Prof. Naotoshi Adachi.

My research interests lie broadly in federated learning and blockchain. I also enjoy climbing, football, and working out.


Education
  • Kyoto University
    Graduate School of Informatics
    Master's student
    April 2026 - March 2028
  • Kansai University
    B.Eng., Department of Civil, Environmental and Applied Systems Engineering
    April 2021 - March 2026
Language
  • Chinese: native
  • Japanese: conversational
  • English: reading & listening
News
2026
🎉 GOFA accepted by ICECET 2026!
Mar 18
Finished my undergraduate dissertation: GOFA
Feb 24
Back to Nanjing, China
Feb 24
2025
Moved to Kyoto! Bye bye Osaka 👋, gonna miss you.
Nov 26
Moving to Kyoto!! ⛩ Looking forward to a new life! ✌️
Nov 17
Selected Publications
GOFA: Gradient-Oriented Backdoor Attack in Vertical Federated Learning

Ye CHENG, Naotoshi Adachi

International Conference on Electrical, Computer and Energy Technologies (ICECET 2026)

Vertical federated learning (VFL) enables multiple organizations with disjoint feature spaces and overlapping sample identities to collaboratively train machine learning models without sharing raw local data. Despite this privacy-preserving paradigm, VFL remains vulnerable to backdoor attacks. In particular, a malicious passive party can inject carefully crafted triggers into local inputs or intermediate embeddings, causing targeted mispredictions during inference. Existing VFL backdoor attacks (e.g., BadVFL) typically assume that the malicious client has additional knowledge of task labels, which conflicts with the core privacy assumptions of VFL. In this paper, we propose GOFA, a gradient-oriented backdoor attack for VFL. GOFA leverages server-provided gradient feedback to construct a poisoned dataset and applies adversarial-example techniques (e.g., FGSM) to mask original features and strengthen trigger learning. Experiments on CIFAR-10 and UCI-HAR demonstrate the effectiveness of our method across multiple settings.
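The FGSM-style feature masking mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the paper's actual implementation: the feature vector, gradient values, and epsilon are all made-up placeholders standing in for a passive party's local features and the gradient feedback returned by the server.

```python
def sign(v):
    """Elementwise sign: +1, -1, or 0."""
    return (v > 0) - (v < 0)

def fgsm_perturb(x, grad, epsilon=0.1):
    """One FGSM step: shift each feature in the direction of the
    loss-gradient sign, scaled by epsilon. In a VFL backdoor setting,
    a passive party could apply such a step to mask a sample's original
    features before embedding the trigger (illustrative only)."""
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: a local feature vector and a made-up gradient signal.
x = [0.2, -0.5, 0.8]
grad = [0.03, -0.07, 0.0]
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
print(x_adv)  # each feature shifted by ±0.1, unchanged where grad is 0
```

The sign function discards gradient magnitude, so the perturbation budget per feature is exactly epsilon, which is what makes FGSM a cheap single-step method.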
