We are seeking research students to undertake the exciting projects described below. If you're interested in a project, please contact the potential supervisor.
Check out our postgraduate study options and the CDU future research students page. To apply for a scholarship, please see our scholarships page.
If you don’t see the exact project for you but are interested in a particular topic, don't hesitate to contact a researcher with expertise.
Current projects
Analysis of malware for discovering authorship features
This project aims to identify the authors behind malware by examining the clues left in the malicious code, specifically authorship information.
Studies in forensic authorship attribution that link attacks to specific groups or individuals require highly skilled analysts. These analysts need access to vast amounts of data to apply intelligence-analytic methods for evidence extraction, and they must understand the mechanisms of malicious binaries, previous incidents, and crime-ware toolkits in order to make two judgement calls: (1) by whom, or from where, was the malicious cyber attack perpetrated? and (2) depending on the level of certainty, should that piece of information be shared for further action?
This research is important as it is the first systematic application of authorship attribution techniques to binary malware.
Success in this project would lead to better intelligence and countermeasures against malware and assist law enforcement agencies in tracking down the criminals behind these attacks.
The project also seeks funding to purchase a malware-analysis system that provides better behavioural decomposition of malware than freely available tools.
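As a toy illustration of the kind of analysis involved (not the project's actual pipeline), the sketch below trains a classifier on opcode n-gram "style" features; the opcode sequences, author labels and model choice are invented assumptions made purely for the example.

```python
# Illustrative sketch: opcode n-gram features for authorship attribution.
# The opcode sequences and author labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is the opcode sequence disassembled from one malware sample.
samples = [
    "push mov call mov xor jmp",
    "push mov call xor xor ret",
    "lea cmp jne mov call ret",
    "lea cmp je mov call ret",
]
authors = ["group_a", "group_a", "group_b", "group_b"]

# Word n-grams over opcodes act as simple stylistic features.
model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(samples, authors)

# Predict the most likely author group for an unseen sample.
print(model.predict(["push mov call xor jmp"]))
```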
Explainable AI for cyber security
In the past few years, artificial intelligence (AI) techniques have been implemented in almost all verticals of human life. However, the results generated by AI models often lack explainability.
AI models often behave as a black box: developers are unable to explain or trace back the reasoning behind a specific decision.
Explainable AI (XAI) is a rapidly growing field of research that helps to extract information from models and visualise the results they generate with optimum transparency.
The decision-making process behind these judgement calls in cyber security can be complex, stressful and cognitively demanding for the analyst, potentially leading to human error, such as failing to send a comprehensive picture of all the incident components up the chain of command. This degrades the organisation's situational awareness and leaves it prone to further cyber attacks.
Lack of interpretability can affect model performance, prevent systematic model tuning, and reduce algorithm trustworthiness.
Resolving this challenge is important for cyber security because deterrence is virtually impossible if analysts cannot identify malicious data and attribute it to the adversary.
Given these human capacity limitations, one solution may be to pair the analyst with a machine agent, optimising cyber sensemaking activities so that they culminate in accurate attributions.
The aim of this project is to develop a recommender system that is explainable and interpretable; that is, trustworthy.
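To make the idea of interpretability concrete, the sketch below shows one simple, model-agnostic explanation technique, permutation feature importance, applied to a toy alert classifier; the feature names and synthetic data are assumptions made purely for illustration and are not part of the project.

```python
# Illustrative sketch: a model-agnostic explanation via permutation importance.
# Feature names and the synthetic "alert" data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["bytes_out", "failed_logins", "dst_port_entropy", "hour_of_day"]
X = rng.normal(size=(500, 4))
# Toy ground truth: alerts driven mostly by failed_logins and bytes_out.
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: larger drop = more important.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Ranking features by how much shuffling them hurts the model gives the analyst a human-readable account of which signals drove a decision, which is the kind of transparency a trustworthy recommender would need to offer.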
Human centric security
One of the fundamental mistakes is approaching security as purely an IT issue.
Both the human and technological aspects of cyber security must be addressed: deception is a key feature of human social interaction (i.e. social engineering), and black-box AI models do not support effective human-machine teaming because they are not designed to explain to their human users how they arrived at their conclusions, which causes distrust and disuse of the system.
A solution to this human-factors challenge is to ensure that system development is conducted by a team with the expertise to address both technical and human considerations.
This research encompasses both a technological and human factors perspective in the study of human-machine teaming in cybercrime evolution.
This project treats human-centric cyber security as encompassing all aspects of cyber security, with a particular focus on human involvement in systems and processes; that is, understanding how humans can pose a risk to an organisation in order to make a substantial impact on user protection.
Biomedical and health informatics
Advances in the Internet of Medical Things (IoMT) have made wearable devices and remote patient monitoring possible like never before.
Machine learning and deep learning techniques help doctors immensely in diagnosing patients remotely by learning patterns from the data these devices generate.
The main problem with traditional machine learning (ML)/deep learning (DL) models is that data from patients' individual devices, sensors, and wearables must be transferred to central servers in order to train the models.
Due to the sensitive nature of healthcare data, the aforementioned approach of transferring the patients’ data to the central servers may create serious security and privacy issues.
In the medical field, previous patients' cases are both extremely private and extremely valuable for diagnosing current disease. How to make full use of these precious cases without leaking patients' private information is therefore a leading and promising line of work, especially as medicine moves towards privacy-preserving intelligent systems.
In this project, we investigate how to securely retrieve patients' records from past case databases while protecting the privacy of both the currently diagnosed patient and the case database, and we construct a privacy-preserving medical record search scheme.
This theme focuses on recent advances in the field of biomedical and health informatics where information and communication technologies intersect with healthcare and biomedicine.
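As a rough illustration of the searchable-record idea (and emphatically not the project's scheme), the sketch below builds a keyword index of HMAC-based search tokens so a server can match queries without seeing plaintext keywords; the key handling, record identifiers and keywords are simplified assumptions, and a real design would need formal security analysis.

```python
# Illustrative sketch of a keyword-search index over medical records, loosely
# in the spirit of symmetric searchable encryption. NOT the project's scheme.
import hmac, hashlib, os
from collections import defaultdict

KEY = os.urandom(32)  # shared secret between the record holder and the query party

def trapdoor(keyword: str) -> bytes:
    """Deterministic keyword token; the server never sees the plaintext keyword."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()

# Build an index mapping keyword tokens -> record identifiers.
index = defaultdict(set)
records = {
    "rec-001": ["diabetes", "retinopathy"],
    "rec-002": ["hypertension"],
    "rec-003": ["diabetes", "neuropathy"],
}
for rec_id, keywords in records.items():
    for kw in keywords:
        index[trapdoor(kw)].add(rec_id)

# The querying clinician sends only the trapdoor; the server returns matching IDs.
print(sorted(index[trapdoor("diabetes")]))   # ['rec-001', 'rec-003']
```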
Privacy and security in distributed edge computing and evolving IoT
Recent advances in artificial intelligence, edge computing, and big data have enabled extensive reasoning capabilities at the edge of the network.
Edge servers are now capable of extracting meaningful analytics from IoT nodes, yielding insights into the unprecedented changes of the data-driven economy, with applications in diverse sectors ranging from smart manufacturing and smart transportation to predictive maintenance and precision healthcare.
Despite this ongoing advancement, there are growing concerns regarding the privacy of data providers when they grant edge applications direct access to their embedded sensors.
Data mining on raw data can be harmful to privacy. For example, mining time-series data from motion sensors, microphones, and GPS sensors can reveal users' activities, demographics, attributes and daily interactions. This can lead to security and privacy concerns in many participatory and opportunistic crowdsensing applications, where a large group of individuals with mobile devices capable of sensing and computing collectively share data and extract information to measure, map, analyse, estimate or infer processes of common interest.
While privacy preservation has not been the initial focus of traditional data analytics on edge servers, in domains such as cyber security the system faces incentivised, malicious adversaries who are willing to game and exploit edge-processing vulnerabilities.
This theme focuses on solutions that leverage techniques and insights from the domains of artificial intelligence, edge computing, and big data to resolve privacy and security challenges in distributed edge computing and evolving IoT applications.
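One common building block for this kind of privacy preservation is to perturb readings on the device before they reach an edge server, in the spirit of local differential privacy. The sketch below is a minimal illustration only; the sensitivity bound, clipping range and privacy budget are assumptions chosen for the example.

```python
# Illustrative sketch: perturbing sensor readings with Laplace noise before
# they leave the device, in the spirit of local differential privacy.
# The clipping range and epsilon below are assumptions for the example.
import numpy as np

rng = np.random.default_rng(42)

def privatise(reading: float, lower: float, upper: float, epsilon: float) -> float:
    """Clamp a reading to [lower, upper] and add Laplace noise scaled to that range."""
    clipped = min(max(reading, lower), upper)
    sensitivity = upper - lower               # maximum change one user can cause
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped + noise

# A day of step-count readings from a wearable, perturbed before upload.
true_readings = [8200, 4300, 12100]
noisy = [privatise(r, lower=0, upper=20000, epsilon=1.0) for r in true_readings]
print(noisy)
```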
Fuzzy hashing and deep learning in malware
This theme uses fuzzy hashes as input to identify similarities at the section level of malware binary code, applying a new generic block-based distance computation algorithm to analyse the binary's technical and textual aspects. A deep learning methodology then refines these similarities to improve detection quality and the scale at which detection can be deployed.
Combining deep learning with fuzzy hashing in this way distils the information needed to identify virtual cyber actors and map them across cyber and physical space.
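As a simplified stand-in for the block-based distance idea (not the new algorithm itself), the sketch below hashes fixed-size blocks of two binaries and compares the resulting sets; the block size, digest truncation and toy "binaries" are all assumptions made for illustration.

```python
# Illustrative sketch of block-based similarity between two binaries: hash
# fixed-size blocks and compare the sets of block hashes (Jaccard similarity).
# This is a simplified stand-in, not the theme's new distance algorithm.
import hashlib

def block_hashes(data: bytes, block_size: int = 64) -> set[str]:
    """Hash each fixed-size block of the input, keeping a short digest per block."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()[:12]
        for i in range(0, len(data), block_size)
    }

def block_similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity over block-hash sets: 1.0 means identical block content."""
    ha, hb = block_hashes(a), block_hashes(b)
    return len(ha & hb) / len(ha | hb) if (ha | hb) else 1.0

sample_a = bytes(range(256)) * 8                              # stand-in binary section
sample_b = sample_a[:1024] + b"\x90" * 64 + sample_a[1088:]   # small patch applied
print(f"similarity: {block_similarity(sample_a, sample_b):.2f}")
```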
Federated learning for cybersecurity
Federated learning (FL) is a recent development in artificial intelligence, which is typically based on the concept of decentralised data.
Because cyberattacks frequently target the various applications deployed in real time, many in industry are hesitant to adopt Internet of Everything technology.
This project aims to provide an extensive study on how FL could be utilised for providing better cybersecurity and preventing various cyberattacks in real-time. We present an extensive survey of the various FL models currently developed by researchers for providing authentication, privacy, trust management, and attack detection.
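A minimal sketch of the federated averaging (FedAvg) idea underpinning much of this work is shown below: clients train locally and share only model weights, which the server aggregates. The synthetic client data, linear model and learning rate are assumptions made for the example, not part of any surveyed system.

```python
# Illustrative sketch of federated averaging (FedAvg) with NumPy: each client
# trains a tiny linear model locally and only shares weights, never raw data.
# The synthetic client datasets below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(50) for _ in range(3)]
w_global = np.zeros(2)

for round_ in range(20):
    local_weights, sizes = [], []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                       # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_weights.append(w)
        sizes.append(len(y))
    # Server aggregates: weighted average of client models (FedAvg).
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("learned weights:", np.round(w_global, 2))   # close to [2.0, -1.0]
```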
Trust, security and privacy for big data
Data has revolutionised the digital ecosystem. Readily available large datasets foster AI and machine learning automated solutions.
The data generated from diverse and varied sources, including IoT, social platforms, healthcare, system logs and bio-informatics, contributes to and defines the characteristic "three Vs" of big data: volume, velocity and variety.
Data lakes formed by the amalgamation of data from these sources require powerful, scalable and resilient storage and processing platforms to reveal the true value hidden inside this data mine.
Data formats and their collection from various sources not only introduce unprecedented challenges to different domains including IoT, manufacturing, smart cars, power grids etc., but also highlight the security and privacy issues in this age of big data.
Security and privacy in big data face many challenges, including generative adversarial networks, efficient encryption and decryption algorithms, encrypted information retrieval, attribute-based encryption, and attacks on availability and reliability.
Providing security and privacy for big data storage, transmission, and processing has been attracting much attention across all big data related areas.
Together, big data and AI have created profound opportunities in every field, enabling the discovery of previously hidden patterns (including zero-day threats) and the development of new insights to inform decisions.
At the same time, protecting information from cyber threats remains an urgent priority, so using big data tools and AI techniques to enhance cybersecurity is a natural development.
Cybersecurity governance and management practices
The multiplication of internal and external data and increased digital management, collaboration, and sharing platforms expose organisations to ever-growing risks.
Understanding the threats and management practices, assessing the risks, adapting the organisation, selecting and implementing the appropriate controls, and operating a management system are the activities required to establish proactive security governance, giving management and customers assurance that an effective mechanism is in place to manage risk.
Indigenous Australians and cyber security challenges
Australian Indigenous communities operate in a strong culture that values community, country and spirituality.
Sharing devices and identities among kin is common and linked to demand sharing practices.
Sharing devices and identities leads to problems of security (safeguarding of data) and privacy (safeguarding of user identity).
This includes illicit use of banking, social media accounts, and government e-services.
This study adopts an Indigenous methodology that puts the Indigenous perspective at the centre of the study; hence, the research uses 'yarning' to facilitate in-depth discussions in a relaxed and open manner.
Leveraging forensic linguistics for cybercrime, cyberterrorism, and intelligence
Innovations in linguistic research into cybercrime, cyberterrorism and cybersecurity have proved indispensable for security practitioners, intelligence agencies and law enforcement.
Through our cutting-edge interdisciplinary projects that harness, inter alia, statistical and automated semantic techniques for in-depth analysis of pressing digital challenges, we yield actionable insights. Focussing on cyber-terrorist communication, hate speech, digital deviance and disinformation campaigns in the digital realm, we meticulously dissect language patterns to serve a multitude of critical purposes. These include establishing evidence of language-related crimes, authorship identification, identifying threats and the likelihood of engagement in terrorist activity, maximising the yield of intelligence gathered from digital communication channels, and linguistically profiling individuals and their online behaviours and motives.
Our work not only safeguards against online threats but also advances our understanding of the criminal nature of language use, ultimately enhancing digital security and fostering a safer online environment.
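As a very small taste of the statistical techniques involved, the sketch below compares character n-gram profiles of short texts with cosine similarity, a basic stylometric signal used in authorship analysis; the example texts and the choice of trigram features are illustrative assumptions, far simpler than real casework.

```python
# Illustrative sketch: comparing character n-gram profiles of two texts with
# cosine similarity, a basic stylometric signal used in authorship analysis.
# The short example texts are invented; real casework uses far richer features.
import math
from collections import Counter

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams as a simple author 'fingerprint'."""
    cleaned = " ".join(text.lower().split())
    return Counter(cleaned[i:i + n] for i in range(len(cleaned) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram count profiles."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = "we will not be silenced, our message spreads tonight"
questioned = "our message will not be silenced, it spreads tonight"
unrelated = "quarterly sales figures exceeded every forecast this year"

print(f"known vs questioned: {cosine(char_ngrams(known), char_ngrams(questioned)):.2f}")
print(f"known vs unrelated:  {cosine(char_ngrams(known), char_ngrams(unrelated)):.2f}")
```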
Join Us
If you are interested in doing a Ph.D. or Master's by research on a particular topic, please contact an ACCI Researcher with expertise in that topic.