
Defending against model stealing attacks with adaptive misinformation

Jan 1, 2020
Sanjay Kariyappa, Moinuddin K Qureshi
Type: Conference paper
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Last updated on Jan 1, 2020


© 2024 Sanjay Kariyappa
