

Study tests if AI can help fight cybercrime

Charles Darwin University academics led a study to see if Artificial Intelligence can help improve cybersecurity testing.

Artificial Intelligence (AI) could become a crucial asset in fighting the growing global risk of cybercrime, a new study from Charles Darwin University (CDU) has found.

The study, led by researchers from CDU’s Energy and Resources Institute alongside Christ Academy Institute for Advanced Studies in India, examined whether generative AI (GenAI) could be used in penetration testing, known as pentesting: a cybersecurity exercise aimed at identifying weak spots in a system’s defences.

Researchers used ChatGPT to run a series of pentesting tasks spanning reconnaissance, scanning, vulnerability assessment, exploitation, and reporting.

Prompts included trying to anonymously log into a server and download files, inspect the source code of webpages, and find data embedded within an archive.
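The last of those tasks, finding data embedded within an archive, can be sketched in a few lines. This is a hypothetical illustration only, not the study's actual prompts or tooling; the `inspect_archive` helper and the flag string are invented for the demo.

```python
import io
import zipfile

def inspect_archive(data: bytes) -> dict:
    """Return the member names and any archive comment found in a ZIP blob."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return {
            "members": zf.namelist(),
            "comment": zf.comment.decode(errors="replace"),
        }

# Build a toy archive that hides data in its comment field, a common
# capture-the-flag trick, then inspect it the way a tester might.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("readme.txt", "nothing to see here")
    zf.comment = b"flag{hidden-in-comment}"

result = inspect_archive(buf.getvalue())
```

In the study's setup, a prompt describing such an archive would ask the model to suggest exactly this kind of inspection step.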

Co-author and CDU Senior Lecturer in Information Technology Dr Bharanidharan Shanmugam said the purpose of the study was to explore whether AI could be used to automate some pentesting activities, with the results showing ChatGPT had enormous potential.

“In the reconnaissance phase, ChatGPT can be used for gathering information about the target system, network, or organisation for the purpose of identifying potential vulnerabilities and attack vectors,” Dr Shanmugam said. 

“In the scanning phase, ChatGPT can be used to aid in performing detailed scans of the target particularly their network, systems and applications to identify open ports, services, and potential vulnerabilities.

“While ChatGPT proved to be an excellent GenAI tool for pentesting for the previous phases, it shone the greatest in exploiting the vulnerabilities of the remote machine.”

Dr Shanmugam added that while the technology could revolutionise pentesting, the use of AI to improve cybersecurity must be strictly monitored.

“Organisations must adopt best practices and guidelines, focusing on responsible AI deployment, data security and privacy, and fostering collaboration and information sharing,” he said. 

“By doing so, organisations can leverage the power of GenAI to better protect themselves against the ever-evolving threat landscape and maintain a secure digital environment for all.”

“Generative AI for pentesting: the good, the bad, the ugly” was published in the International Journal of Information Security.
