Resume
Basics
Name  | Zhiyu Xie
Email | zhiyuxie@stanford.edu
Url   | https://zhiyuxie.top/
Work
- 2024.06 - 2024.08 | Intern | Two Sigma, Security Incident Response Team
LLM Security Application Reviewer
- Developed the LLM Security Application Reviewer to streamline the security assessment process for Windows applications.
- Implemented a Flask-based web service with a React frontend and a Postgres database.
- Integrated security check APIs (e.g., VirusTotal, Signify) to analyze application executables.
- Leveraged LLMs to extract security-related information from application manuals, improving retrieval quality with Retrieval-Augmented Generation (RAG), semantic chunking, and a second-pass LLM filter (see the sketch after this list).
- Deployed the service to production, reducing review time from days to 4 minutes, with an estimated annual savings of 1.25 FTE.
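The bullets above describe a retrieve-then-filter flow. The sketch below is a minimal illustration of that kind of pipeline, not the production system: the function names, the term-overlap scoring that stands in for real embeddings, and the placeholder llm callable are all assumptions.

```python
# Minimal sketch of a RAG-style document review pipeline (illustrative only).
from collections import Counter
from typing import Callable, List

def semantic_chunks(manual: str, max_sentences: int = 5) -> List[str]:
    """Naive stand-in for semantic chunking: group sentences into fixed-size chunks."""
    sentences = [s.strip() for s in manual.split(".") if s.strip()]
    return [". ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def score(query: str, chunk: str) -> int:
    """Term-overlap score standing in for embedding similarity."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values())

def retrieve(query: str, chunks: List[str], k: int = 3) -> List[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def review(query: str, manual: str, llm: Callable[[str], str]) -> List[str]:
    """Retrieve candidate passages, then apply a second-pass LLM filter."""
    kept = []
    for chunk in retrieve(query, semantic_chunks(manual)):
        verdict = llm(f"Does this passage describe {query}? Answer YES or NO.\n\n{chunk}")
        if verdict.strip().upper().startswith("YES"):
            kept.append(chunk)
    return kept

if __name__ == "__main__":
    fake_llm = lambda prompt: "YES"   # stand-in for a real LLM client
    manual = "The installer writes to the registry. It requires admin rights. It opens no network ports."
    print(review("network access", manual, fake_llm))
```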
- 2022.06 - 2023.05 | Research Assistant | UCLA PlusLab
AMR-Aware Prefix for Generation-Based Event Argument Extraction Model
- Incorporated external structural information, the Abstract Meaning Representation (AMR) graph, into generative models, which are attractive for the event argument extraction task because of their flexibility and efficiency.
- Proposed AMPERE, which generates AMR-aware prefixes for every layer of the generation model (see the sketch after this list).
- Introduced an adjusted copy mechanism to AMPERE to help overcome potential noise.
- The method achieves 4%-10% absolute F1 score improvements and is especially effective in low-resource settings.
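The sketch below shows only the per-layer prefix idea: a pooled AMR-graph encoding is projected into key/value prefixes that a BART-style model could attend to at every layer. The class name, dimensions, and pooling are illustrative assumptions, not the actual AMPERE implementation.

```python
# Hypothetical sketch of AMR-aware prefix generation in the spirit of AMPERE.
import torch
import torch.nn as nn

class AMRPrefixGenerator(nn.Module):
    def __init__(self, amr_dim: int, num_layers: int, num_heads: int,
                 head_dim: int, prefix_len: int):
        super().__init__()
        self.num_layers, self.num_heads = num_layers, num_heads
        self.head_dim, self.prefix_len = head_dim, prefix_len
        # One projection produces keys and values for every layer at once.
        out_dim = num_layers * 2 * num_heads * head_dim * prefix_len
        self.proj = nn.Sequential(
            nn.Linear(amr_dim, 512), nn.Tanh(), nn.Linear(512, out_dim))

    def forward(self, amr_encoding: torch.Tensor):
        """amr_encoding: (batch, amr_dim) pooled AMR-graph representation."""
        batch = amr_encoding.size(0)
        prefixes = self.proj(amr_encoding).view(
            batch, self.num_layers, 2, self.num_heads,
            self.prefix_len, self.head_dim)
        # Return per-layer (key, value) prefixes to prepend in attention.
        return [(prefixes[:, l, 0], prefixes[:, l, 1])
                for l in range(self.num_layers)]

# Example: prefixes for a 6-layer model with 8 heads of size 64.
gen = AMRPrefixGenerator(amr_dim=256, num_layers=6, num_heads=8,
                         head_dim=64, prefix_len=10)
layer_prefixes = gen(torch.randn(2, 256))
print(len(layer_prefixes), layer_prefixes[0][0].shape)  # 6, (2, 8, 10, 64)
```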
- 2021.07 - 2021.09 | Intern | Google, TensorFlow Lite Team
Model Distillation in TensorFlow Model Maker Library
- To further facilitate TensorFlow Lite on-device training, introduced a model distillation method for model compression in the TensorFlow Lite Model Maker library.
- Followed object-oriented programming principles to implement model distillation for image, audio, and text classification tasks, enabling users to create a fine-tuned end-to-end model on a custom dataset in just 6 lines of code (a generic sketch of the underlying technique follows the list).
- Carried out comprehensive experiments showing a 90% reduction in parameters while maintaining competitive, and sometimes better, accuracy compared to the teacher network.
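For context, the block below sketches a generic knowledge-distillation training step in Keras: the student is trained on a blend of hard-label cross-entropy and temperature-scaled KL divergence against the frozen teacher's logits. This is only an illustration of the technique, not the TensorFlow Lite Model Maker API, and the hyperparameters are placeholder values.

```python
# Generic knowledge-distillation training step in Keras (illustrative only).
import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits,
                      alpha: float = 0.1, temperature: float = 3.0):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / temperature),
        tf.nn.softmax(student_logits / temperature)) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft

def train_step(teacher, student, optimizer, x, y):
    teacher_logits = teacher(x, training=False)      # frozen teacher
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        loss = tf.reduce_mean(
            distillation_loss(y, student_logits, teacher_logits))
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```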
- 2020.09 - 2021.02 | Research Assistant | THUNLP
Reviewing Test Protocols of Distantly Supervised Relation Extraction
- Examined two popular relation extraction datasets (NYT10 and Wiki20) for annotation errors introduced by the distant supervision used to build them.
- Proposed an improved relation ontology, applied data cleaning, and constructed manually annotated test sets for NYT10 and Wiki20, correcting 53% of the wrong labels in NYT10.
- Analyzed performance differences of competitive models on the manually annotated and distantly supervised test sets (an illustrative evaluation snippet follows the list).
- The conclusions shed light on the importance of more accurate evaluation for relation extraction research.
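The snippet below illustrates the kind of comparison described above: scoring the same predictions against a distantly supervised test set and a manually annotated one, using micro-F1 over non-NA relations. The toy labels, relation names, and the sklearn-based scorer are assumptions for illustration only.

```python
# Illustrative comparison of micro-F1 on distantly supervised vs. manually
# annotated test sets (toy data; scikit-learn assumed available).
from sklearn.metrics import f1_score

def evaluate(predictions, gold_labels, relations):
    """Micro-F1 over non-NA relation labels, a common RE evaluation choice."""
    labels = [r for r in relations if r != "NA"]
    return f1_score(gold_labels, predictions, labels=labels, average="micro")

relations    = ["NA", "place_of_birth", "employee_of"]
preds        = ["employee_of", "NA", "place_of_birth", "employee_of"]
distant_gold = ["employee_of", "NA", "employee_of",    "employee_of"]
manual_gold  = ["employee_of", "NA", "place_of_birth", "NA"]

print("distantly supervised:", evaluate(preds, distant_gold, relations))
print("manually annotated:  ", evaluate(preds, manual_gold, relations))
```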
Education
- 2023.09 - 2025.06 | Master of Science in Computer Science | Stanford University, CA, United States of America
- GPA: 4.26/4.30
- Randomized Algorithms (A+)
- Deep Generative Models (A+)
- Distributed Systems (A+)
- Computer Vision (A+)
- Introduction to Cryptography (A+)
- 2019.08 - 2023.06 | Bachelor of Engineering in Computer Science and Technology | Tsinghua University, Beijing, China
- GPA: 3.97/4.00 (Rank 3/202)
- Data Structures (A)
- Introduction to the Theory of Computation (A)
- Operating Systems (A)
- Compiler Construction (A)
- Theory of Computer Network (A)
Languages
Chinese (Mandarin) | Native speaker
English            | Fluent