Posted on 2023-11-21 16:34:35
Information about the two supervisors is as follows:

About: Dr. Ahmed M. A. Sayed (a.k.a. Ahmed M. Abdelmoniem) is a Lecturer (Research & Teaching), the equivalent of Assistant Professor, at the School of Electronic Engineering and Computer Science at Queen Mary University of London, UK. He is also the Director of the MSc Big Data Science Programme. He leads the SAYED Systems Group and works on various topics related to Distributed Systems, Systems for ML & ML for Systems, Federated Learning, Edge/Cloud Computing, Congestion Control, and Software-Defined Networking (SDN). See more information at https://eecs.qmul.ac.uk/~ahmed/
Research: His research spans inter-related disciplines of computer science and engineering, with a focus on system design and optimization for machine learning systems (training and inference efficiency, distributed ML, federated learning), distributed systems (architecture design, performance analysis, resource allocation, algorithmic optimization), computer networks (traffic engineering, congestion control, performance optimization, software-defined networking), and wireless networks (routing in mobile ad-hoc and wireless sensor networks).


About: Dr. Ziquan Liu is an incoming Lecturer (Research & Teaching), the equivalent of Assistant Professor, at the School of EECS at Queen Mary University of London, affiliated with the Computer Vision group led by Prof. Shaogang Gong (FREng). Dr. Liu has been a Research Fellow at University College London since April 2023. He obtained his Ph.D. in Computer Science from the City University of Hong Kong in 2023 under the supervision of Prof. Antoni B. Chan, and received Bachelor's degrees in Information Engineering and in Mathematics from Beihang University, both in 2017. See more information at https://sites.google.com/view/ziquanliu
Research: His research focuses on AI safety, including trustworthy machine learning, alignment, and explainable AI. His work has been published in top conferences and journals in machine learning and computer vision, including NeurIPS, ICLR, CVPR, IJCAI, and TPAMI.

Selected publications
[1] Ahmed M. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini. "An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems". Conference on Machine Learning and Systems (MLSys), 2021.
[2] Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Marco Canini. "Empirical Analysis of Federated Learning in Heterogeneous Environments". 2nd Workshop on Machine Learning and Systems (EuroMLSys), ACM EuroSys, 2022.
[3] Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis. "Rethinking Gradient Sparsification as Total Error Minimization". 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
[4] Yufei Cui, Ziquan Liu, Yixin Chen, Yuchen Lu, Xinyue Yu, Xue Liu, Tei-Wei Kuo, Miguel R. D. Rodrigues, Chun Jason Xue, Antoni B. Chan. "Retrieval-Augmented Multiple Instance Learning". Neural Information Processing Systems (NeurIPS), 2023.
[5] Ziquan Liu, Yi Xu, Xiangyang Ji, Antoni B. Chan. "TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization". IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
[6] Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Xiangyang Ji, Antoni B. Chan, Rong Jin. "Improved Fine-Tuning by Better Leveraging Pre-Training Data". Neural Information Processing Systems (NeurIPS), 2022.




Opening: There are CSC studentships available for 2024 Fall admission. The research topics include, but are not limited to, the following:

Efficient training and fine-tuning of large language models
In the rapidly evolving landscape of artificial intelligence, the development of sophisticated Generative AI and Large Language Models (LLMs) has become pivotal for various applications, ranging from natural language processing to creative content generation. However, the training of these models is computationally intensive, often requiring substantial time and resources. This project will study and propose system and algorithmic optimizations to accelerate the training process for Generative AI and LLMs, addressing the challenges posed by the complexity of these models. The core focus of this research lies in the exploration and implementation of advanced parallel computing techniques, leveraging the power of distributed systems and specialized hardware accelerators. By optimizing algorithms, employing parallelization strategies, and harnessing the capabilities of GPUs, TPUs, or emerging AI-specific hardware, this project aims to significantly reduce the training time of Generative AI and LLMs, making the process more efficient and cost-effective. Furthermore, the study delves into the realm of transfer learning and explores techniques to enhance model convergence and accuracy. By leveraging pre-trained models and developing novel transfer learning methodologies, the research intends to minimize the amount of data and computational resources required for training, thereby democratizing access to cutting-edge AI technologies.
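As a rough illustration of the efficiency theme above, the following is a minimal sketch (purely illustrative, not any specific method from the group) of parameter-efficient fine-tuning in PyTorch: the pre-trained backbone is frozen and only a small task head is trained, which reduces the gradients that must be computed, stored, and communicated in a distributed setting. The toy backbone, dimensions, and random data are all assumptions made for the sake of a runnable example.

import torch
import torch.nn as nn

vocab, d_model, seq_len, n_classes = 1000, 128, 32, 4

# Stand-in for a pre-trained language-model backbone (sizes are made up).
backbone = nn.Sequential(
    nn.Embedding(vocab, d_model),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    ),
)
head = nn.Linear(d_model, n_classes)   # small task-specific head

for p in backbone.parameters():        # freeze the backbone: far fewer gradients
    p.requires_grad = False            # to compute, store, and communicate

opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):                  # toy fine-tuning loop on random data
    x = torch.randint(0, vocab, (8, seq_len))
    y = torch.randint(0, n_classes, (8,))
    with torch.no_grad():              # frozen backbone: no graph needed
        feats = backbone(x).mean(dim=1)
    loss = loss_fn(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")

In practice the same idea scales out by wrapping only the trainable parameters in a data-parallel setup, so the communication volume per step tracks the small head rather than the full model.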


Efficient machine learning on decentralized data at scale, and the associated privacy and bias
AI/ML systems are becoming an integral part of user products and applications as well as the main revenue driver for most organizations. This has shifted the focus towards bringing the intelligence to where the data are produced, including training the models on these data. Existing approaches operate as follows: 1) the data are collected on multiple servers and processed in parallel (e.g., Distributed Data-Parallel); 2) a server coordinates the training rounds and collects model updates from the clients (e.g., Federated Learning); 3) the model training is split between the clients and the server (e.g., Split Learning); or 4) the clients coordinate among themselves via gossip protocols (i.e., Decentralized Training). The challenges that arise include highly heterogeneous learners, configurations, and environments; communication and synchronization overheads; fairness and bias; and privacy and security. As a result, existing approaches fail to scale to large numbers of learners and produce low-quality, highly biased models after prolonged training times. It is imperative to build systems that deliver high-quality models in a timely manner. This project addresses this gap by exploring novel ideas and proposing efficient and scalable ML systems for decentralized data.
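For concreteness, here is a minimal sketch of federated averaging (FedAvg) on synthetic, non-IID client data. It is purely illustrative (NumPy, a toy linear model, made-up client sizes and learning rates) and not the project's actual system, but it shows the client/server roles and the size-weighted aggregation around which the heterogeneity and bias challenges above revolve.

import numpy as np

rng = np.random.default_rng(0)
d, n_clients, rounds, local_steps, lr = 5, 4, 15, 10, 0.01
w_true = rng.normal(size=d)

# Each client holds a differently sized, differently shifted dataset (non-IID).
clients = []
for _ in range(n_clients):
    n = int(rng.integers(20, 100))
    X = rng.normal(loc=rng.normal(), size=(n, d))   # per-client feature shift
    y = X @ w_true + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(d)
for r in range(rounds):
    updates, sizes = [], []
    for X, y in clients:             # each client runs local SGD from the global model
        w = w_global.copy()
        for _ in range(local_steps):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    # Server step: aggregate client models, weighted by local dataset size.
    w_global = np.average(updates, axis=0, weights=sizes)
    mse = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
    print(f"round {r}: global MSE {mse:.4f}")

Even in this toy setting, the size-weighted average favours clients with more data, which hints at why fairness, bias, and stragglers become first-order concerns at scale.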

Safety and reliability of generative AI and large language models
TBA
Privacy and bias in AI + healthcare, especially in the context of distributed training
TBA

Interested students, please send your CV to both ahmed.sayed@qmul.ac.uk and ziquanliu.cs@gmail.com

