Abstract

Handcrafted rule-based intrusion detection systems tend to overlook sophisticated intrusions due to unexpected attacker behavior or human error in analyzing complex control flows. Current machine learning systems, mostly based on artificial neural networks, have the inherent problem that their models cannot be verified, since decisions depend on probabilities. To bridge the gap between handcrafted rule systems and probability-based systems, our approach uses genetic programming to generate rules that are verifiable, in the sense that one can confirm that an extracted pattern matches a known attack. The RulEth rule language is designed to evaluate a window of packets, which allows the system to detect anomalies in the message flow. Alerts are enriched with the root cause of the classification as an anomalous event, which in turn supports decisions to trigger countermeasures. Although the attacks examined in this work are far more complex than those considered in most other work in the automotive domain, our results show that most of them can be identified reliably. Because each generated rule can be evaluated separately, ineffective rules can be discarded, which improves the robustness of the system. Furthermore, using design flaws found in a public dataset, we demonstrate the importance of verifiable models for reliable systems.