About the Lab
AI and Secure Computing (AISeC) is a research lab based at Loyola University Chicago.
The laboratory focuses on research at the intersection of AI, cybersecurity, and privacy. Our work spans several core areas, including but not limited to:
Information Security
We apply advances in machine learning to information and system security.
Software Security
We study methods for software authorship attribution and vulnerability analysis.
Privacy
We study authorship anonymization and behavioral biometrics for authentication.
Robustness and Adversarial ML
We analyze the security properties of machine learning models.
Applied Machine Learning
We study and employ machine learning in various domains.
Malware Analysis
We utilize various characterizing features to build machine learning models to detect and analyze malware.
People
Mujtaba Nazari
Loyola University Chicago
Rachel Gordon
Loyola University Chicago
Ihab Al Shaikhli
Loyola University Chicago
Jason Luce
Loyola University Chicago
Maddie Juarez
Loyola University Chicago
Alina Zacaria
Loyola University Chicago
Nick Zappia
Loyola University Chicago
Matt Hyatt
Loyola University Chicago
Erik Pautsch
Loyola University Chicago
Research Projects
Robustness and Adversarial Machine Learning
Deep neural networks have achieved state-of-the-art performance in various applications. It is crucial to verify that a model's high accuracy on a given task derives from a correct representation of the problem rather than from artifacts in the data. Investigating the security properties of machine learning models, recent studies have demonstrated several categories of adversarial attacks, such as model leakage, data membership inference, model confidence reduction, evasion, and poisoning attacks. We believe it is important to investigate and understand the implications of such attacks on sensitive applications in the field of information security and privacy.
DL-FHMC: Robust classification
AdvEdge: AML against IDLSes
SGEA: AML against malware detectors
Soteria: Robust malware detectors
Black-box and Target-specific AML
Exploration of Black-box AML
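As a minimal illustration of the evasion attacks mentioned above, the sketch below crafts an adversarial example against a toy logistic-regression classifier using the fast gradient sign method (FGSM). The weights, input, and perturbation budget are all hypothetical, chosen only to show how a small input-space perturbation can flip a model's decision.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Craft an adversarial example for a logistic-regression model
    via the fast gradient sign method (FGSM)."""
    # Model: p(y=1|x) = sigmoid(w.x + b), trained with cross-entropy loss.
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss with respect to the input x.
    grad_x = (p - y) * w
    # Step in the direction that increases the loss, bounded by eps.
    return x + eps * np.sign(grad_x)

# Hypothetical model and input: the clean input is classified as class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])   # clean input, true label 1
x_adv = fgsm_attack(x, w, b, y=1, eps=3.0)
# The perturbed input is now classified as class 0 (evasion succeeds).
```

The same idea scales to deep models, where the input gradient is obtained by backpropagation rather than in closed form.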
Malware Analysis
Malware is one of the most serious computer security threats, and accurate detection of malware is essential to protect computers from infection. At the same time, malware detection faces two main practical challenges: the speed at which new malware is developed and distributed continues to increase, and malware employs increasingly complex methods to evade detection (such as metamorphic or polymorphic techniques). This project utilizes various characterizing features, extracted from each malware sample using static and dynamic analysis, to build seven machine learning models that detect and analyze malware. We also investigate the robustness of such machine learning models against adversarial attacks.
MLxPack
DL-FHMC: Robust malware classification
SGEA: AML against malware detectors
Soteria: Robust malware detectors
Spectral Representations of CFG
ShellCore
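One simple static feature commonly used in this line of work is the byte-value distribution of a binary, whose Shannon entropy can hint at packing or encryption. The sketch below is illustrative only: the byte blobs are made up, not real malware samples.

```python
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    """Static feature: normalized frequency of each of the 256 byte values."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

def byte_entropy(hist: np.ndarray) -> float:
    """Shannon entropy (bits) of the byte distribution; values near 8
    often indicate packed or encrypted payloads."""
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical blobs standing in for real binaries.
uniform_blob = bytes(range(256)) * 4   # maximally diverse bytes
padded_blob = bytes([0x00]) * 1024     # constant padding
h_uniform = byte_entropy(byte_histogram(uniform_blob))  # -> 8.0 bits
h_padded = byte_entropy(byte_histogram(padded_blob))    # -> 0.0 bits
```

In practice such per-sample feature vectors (byte histograms, n-grams, API-call traces, CFG features) are fed to the downstream classifiers.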
Continuous User Authentication
Smartphones have become crucial to our daily lives and are increasingly loaded with personal information, performing sensitive tasks such as mobile banking, communication, and storing private photos and files. There is therefore high demand for usable continuous authentication techniques that prevent unauthorized access. We work on a deep learning-based active authentication approach that exploits the sensors in consumer-grade smartphones to authenticate users, addressing various aspects of accuracy, efficiency, and usability.
AUToSen: Continuous Authentication
Contemporary Survey on Sensor-Based Continuous Authentication
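A common preprocessing step in sensor-based continuous authentication is to segment the raw sensor stream into overlapping windows and compute per-window features that a model can score. The sketch below is a minimal, hypothetical version of that step (window size, stride, and features are assumptions, not the lab's actual pipeline).

```python
import numpy as np

def window_features(signal, win=128, step=64):
    """Segment a 1-D sensor stream (e.g., one accelerometer axis) into
    overlapping windows and compute simple per-window statistics."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        # Mean, standard deviation, and signal energy per window.
        feats.append([w.mean(), w.std(), float((w ** 2).mean())])
    return np.array(feats)

# Simulated sensor stream standing in for real accelerometer readings.
rng = np.random.default_rng(0)
stream = rng.normal(size=1024)
X = window_features(stream)   # one feature row per window
```

Each row of `X` would then be scored by the authentication model, so a decision is available every `step` samples rather than once per session.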
Machine Learning for Social Good
Computer-aided methods for analyzing white blood cells (WBCs) are popular due to the complexity of the manual alternatives. We work on deep learning-based methods for WBC classification. We proposed W-Net, which we evaluated on a large-scale real-world dataset of 6,562 images spanning the five WBC types. The dataset was provided by The Catholic University of Korea (The CUK) and approved by the Institutional Review Board (IRB) of The CUK. We also generate synthetic WBC images using generative adversarial networks, to be shared for education and research purposes.
W-Net: WBC Classification
Online Toxicity in Users Interactions with Mainstream Media
Children Exposure to Inappropriate Comments in YouTube
Synthetic MRI from CT
Sentiment analysis of Users' Reactions on Social Media
Machine Learning for Medical Images
This project synthesizes magnetic resonance images (MRI) from computed tomography (CT) simulation scans using deep-learning models for high-dose-rate prostate brachytherapy. In this procedure, both CT and MRI are typically acquired: CT to identify catheters and MRI to delineate the prostate. We propose a deep-learning model that generates synthetic MRI (sMRI) with enhanced soft-tissue contrast from CT scans. sMRI would assist physicians in accurately delineating the prostate without the need to acquire an additional planning MRI.
W-Net: WBC Classification
Synthetic MRI from CT
Network Security and Online Privacy
Network monitoring applications such as flow analysis, intrusion detection, and performance monitoring have become increasingly important owing to the continuous increase in the speed and volume of network traffic. We investigate the feasibility of an in-network intrusion detection system that leverages the computational capabilities of commodity switches to enable fast, real-time response. We also explore traffic sampling techniques that preserve flows' behavior, enabling intelligent network monitoring and management. In addition, we address growing privacy concerns over website fingerprinting attacks, which remain effective despite popular anonymity tools such as Tor and VPNs.
Traffic Sampling for NIDSs
Multi-X
Exploring the Proxy Ecosystem
Studying the DDoS Attacks Progression
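One way to sample traffic while preserving flows' behavior is flow-consistent hash sampling: the keep/drop decision depends only on a flow's 5-tuple, so either all packets of a flow are kept or none are. The sketch below is a generic illustration of that idea, not the lab's specific sampling technique; the flow tuple is hypothetical.

```python
import hashlib

def keep_flow(five_tuple, rate=0.25):
    """Flow-consistent sampling: hash the flow's 5-tuple and keep a
    packet iff the hash falls below the sampling threshold.  The
    decision depends only on the 5-tuple, so every packet of a kept
    flow is kept, preserving per-flow behavior for later analysis."""
    key = "|".join(map(str, five_tuple)).encode()
    # Use the top 64 bits of a SHA-256 digest as a uniform hash value.
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return h < rate * 2**64

# Hypothetical flow: (src_ip, dst_ip, src_port, dst_port, protocol).
flow = ("10.0.0.1", "10.0.0.2", 51234, 443, "TCP")
decision = keep_flow(flow)
# Every packet carrying the same 5-tuple receives the same decision.
```

Over many flows the fraction kept converges to `rate`, while each sampled flow remains complete, which is what flow-level intrusion detection needs.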
Publications
Programs
SecureAI
SecureAI is a cybersecurity and privacy training program designed to equip AI professionals and researchers with the knowledge and skills to build AI systems that are technically sound and secure. The program is a series of 12 engaging, hands-on synchronous online workshops held every two weeks over six months. Sessions are held in the evening (6:00 PM CT) to accommodate participants with busy workday schedules. SecureAI follows an experiential training approach led by experts in AI security, and each workshop includes practical lab activities.
SecureAI Website
CyberRamblers
The CyberRamblers program is Loyola University Chicago's CyberCorps® Scholarship for Service (SFS) program. Four cohorts of five undergraduate students each, for a total of 20 students, will be CyberRambler scholars, with each cohort receiving the scholarship for two years. Each CyberRambler scholar will be advised and mentored to ensure adequate progress toward graduation and successful job placement. Each cohort will participate in cybersecurity club activities and engage in research projects. These scholarships for service provide the hands-on training and education scholars need to start a successful cybersecurity career in the US government. The scholarship provides a stipend, tuition, and a professional development allowance for two years.
CyberRamblers Website
Sponsors
Contact Us
Our Address
Loyola Center for Cybersecurity
306 Doyle Center, Lake Shore Campus
1052 W Loyola Ave. Chicago, IL 60626
Email Us
Dr. Mohammed Abuhamad: mabuhamad AT luc DOT edu
Dr. Eric Chan-Tin: dchantin AT luc DOT edu