Research

I am an active researcher in dependability and cyber security, as my DBLP profile and my Google profile attest.

Focus

My main research application domains are critical and smart infrastructures, and healthcare.

I'm interested in the following research topics:

  • Software and Application Security (AppSec)
    • Cyber security in the Software Development Life-Cycle (SDLC)
    • Domain-Driven Design (DDD) applied to cyber security
    • Vulnerability life-cycle and impact on software quality
  • Attack Modelling Techniques (AMT)
    • Reasoning about and modelling adversarial behaviours
    • Cyber Threat Intelligence (CTI) and modelling (e.g., STIX™); see the sketch after this list
    • Threat Hunting
      • Methods and tools for intrusion detection
  • Tackling large, longitudinal, near-realtime datasets for cyber security
    • Time series data obtained from sensing/monitoring infrastructure, etc.
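
As a concrete flavour of the CTI modelling mentioned above, the snippet below is a minimal sketch of a STIX 2.1 Indicator object built as a plain Python dictionary; the indicator name and the SHA-256 hash are illustrative placeholders, not real threat intelligence.

  import json
  import uuid
  from datetime import datetime, timezone

  # Timestamp in the RFC 3339 format used by STIX
  now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

  indicator = {
      "type": "indicator",
      "spec_version": "2.1",
      "id": f"indicator--{uuid.uuid4()}",   # STIX identifiers are "type--UUID"
      "created": now,
      "modified": now,
      "name": "Known malware sample (illustrative placeholder)",
      "indicator_types": ["malicious-activity"],
      "pattern_type": "stix",
      # STIX patterning expression matching a file by its SHA-256 hash
      "pattern": "[file:hashes.'SHA-256' = "
                 "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
      "valid_from": now,
  }

  print(json.dumps(indicator, indent=2))

Tooling such as the OASIS stix2 Python library can generate and validate such objects, but a plain dictionary is enough to show the shape of the data.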

The US Cybersecurity and Infrastructure Security Agency (CISA) states that "Cyber security is the art of protecting networks, devices, and data from unauthorized access or criminal use and the practice of ensuring confidentiality, integrity, and availability of information".

I would like to stress that, to be successful in cyber security, one must understand and appreciate technology, which underpins everything we want to protect.

Datasets

Next you will find a non-exhaustive list of datasets for research (or for other purposes, such as learning how to tackle large datasets).

Terminology

Familiarise yourself with the following concepts:

  • Research ethics
  • Methodology (and following one)
  • Impact, finding good venues, etc.
  • Repeatability* (same team, same experimental setup)
  • Reproducibility (different team, same experimental setup)
  • Replicability (different team, different experimental setup)
*As defined by ACM (Artifact Review and Badging)

Reproducibility refers to the ability of different experts to produce the same results from the same data.

Repeatability refers to the ability to repeat an assessment in the future, in a manner that is consistent with, and hence comparable to, prior assessments, enabling the organisation to identify trends.

On reproducibility in computing

Ten Simple Rules for Reproducible Computational Research (Sandve et al., PLoS Computational Biology, 2013):
  • Rule 1: For Every Result, Keep Track of How It Was Produced
  • Rule 2: Avoid Manual Data Manipulation Steps
  • Rule 3: Archive the Exact Versions of All External Programs Used
  • Rule 4: Version Control All Custom Scripts
  • Rule 5: Record All Intermediate Results, When Possible in Standardized Formats
  • Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds
  • Rule 7: Always Store Raw Data behind Plots
  • Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected
  • Rule 9: Connect Textual Statements to Underlying Results
  • Rule 10: Provide Public Access to Scripts, Runs, and Results
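
To make a couple of these rules concrete, here is a minimal Python sketch, assuming NumPy and Matplotlib are available and using illustrative file names: it notes the random seed behind an analysis (Rule 6) and stores the raw data and provenance behind a plot alongside the figure itself (Rule 7).

  import json

  import matplotlib
  matplotlib.use("Agg")              # render off-screen; we only save the figure
  import matplotlib.pyplot as plt
  import numpy as np

  SEED = 42                          # Rule 6: note the underlying random seed
  rng = np.random.default_rng(SEED)

  # A stand-in "analysis": draw samples from a standard normal distribution
  samples = rng.normal(loc=0.0, scale=1.0, size=1_000)

  # Rule 7: store the raw data behind the plot, together with its provenance
  np.savetxt("samples_raw.csv", samples, delimiter=",")
  with open("samples_meta.json", "w") as fh:
      json.dump({"seed": SEED, "generator": "numpy.random.default_rng",
                 "distribution": "normal(0, 1)", "n": int(samples.size)}, fh)

  plt.hist(samples, bins=50)
  plt.xlabel("value")
  plt.ylabel("count")
  plt.savefig("samples_hist.png")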

Other interests

I'm interested in a plethora of topics, including:

  • Non-Functional Properties (NFP) of Systems
    • Major areas of interest: dependability and cyber security, performance, fault tolerance, usability
  • Quantitative Performance Evaluation of Systems
    • Modelling & Simulation (M&S): abstractions, mapping, parametrisation, accreditation, multiple scenario "what-if" analysis, statistical analysis
    • Analytic Modelling: Markov processes and structured Markov chains (see the sketch after this list)
    • Monitoring: instrumentation and impact on performance, novel approaches, techniques, and methods
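
As a small illustration of the analytic modelling item above, the sketch below solves the steady-state balance equations pi Q = 0 with sum(pi) = 1 for a toy two-state failure/repair model; the failure and repair rates are illustrative values, not measurements from any real system.

  import numpy as np

  lambda_f = 0.01   # failure rate (per hour), illustrative value
  mu_r = 0.5        # repair rate (per hour), illustrative value

  # Generator matrix Q of a two-state CTMC: state 0 = "up", state 1 = "down"
  Q = np.array([
      [-lambda_f, lambda_f],
      [ mu_r,    -mu_r    ],
  ])

  # Solve pi Q = 0 subject to sum(pi) = 1 by replacing one balance
  # equation with the normalisation constraint
  A = np.vstack([Q.T[:-1], np.ones(2)])
  b = np.array([0.0, 1.0])
  pi = np.linalg.solve(A, b)

  # For this toy model the result equals mu_r / (lambda_f + mu_r)
  print(f"steady-state availability P[up] = {pi[0]:.6f}")

The closed form mu_r / (lambda_f + mu_r) for this two-state model makes it easy to check the numerical result.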

Engagements

Check out my start page for more information on current and past engagements.

I do have other interests besides research, so please check out my interests webpage for more information.