Thesis topic:

Explainable AI for Cybersecurity of the Industrial Internet of Things

  • Supervisor: Mubashar Iqbal
    • Contact: mubashar.iqbal@ut.ee
  • Developing AI models that provide transparent explanations for their decisions in cybersecurity applications. Explainable Artificial Intelligence (XAI) is an emerging field within AI and machine learning that focuses on building models and systems whose decisions and actions can be understood and interpreted. Many AI and machine learning models, such as deep neural networks, are highly complex and function as "black boxes": they produce results, but how and why they reached those conclusions is often unclear. In cybersecurity this matters, because decisions about threat detection, anomaly identification, and risk assessment must be explainable; understanding why a particular action is flagged as malicious or risky is crucial for security analysts and system administrators. XAI is therefore key to enhancing the transparency and trustworthiness of AI-driven security solutions.
  • References:
    • S. Suhail, "ENIGMA: An explainable digital twin security solution for cyber-physical systems"
    • D. K. Sharma, "Explainable Artificial Intelligence for Cybersecurity"
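To make the idea concrete, here is a minimal sketch of one common model-agnostic XAI technique, permutation feature importance, applied to a toy intrusion-detection classifier. The feature names and synthetic data are hypothetical illustrations, not part of the thesis description; the point is that the analyst can see *which* traffic features drive the "malicious" verdict.

```python
# Sketch: permutation feature importance for a toy IIoT intrusion detector.
# All feature names and data below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical network-traffic features
feature_names = ["packet_rate", "payload_entropy",
                 "failed_logins", "port_scan_count"]
X = rng.normal(size=(n, 4))
# Synthetic ground truth: traffic is "malicious" when failed logins
# and port scans are jointly high (the other features are noise).
y = ((X[:, 2] + X[:, 3]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling one feature's
# values degrade held-out accuracy? Large drop = influential feature.
result = permutation_importance(clf, X_te, y_te,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda p: -p[1])
for name, imp in ranked:
    print(f"{name}: {imp:.3f}")
```

Because the synthetic labels depend only on `failed_logins` and `port_scan_count`, those two features should rank highest, which is exactly the kind of human-readable justification XAI aims to give a security analyst.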