Abstract

Defending against botnets has always been a cat-and-mouse game. Cyber-security researchers and government agencies play the role of the cat, attempting to detect and take down botnets. In this context, a great deal of work has gone into reverse engineering particular variants of malware families and into understanding botnet network protocols in order to identify their weaknesses (if any) and exploit them. While this is necessary, such an approach allows botmasters to quickly counteract defenders by making only small changes to their arsenals.
We take a different approach: we assume the role of the botmaster in order to anticipate the botmaster's next moves. In this presentation, we introduce a novel computational trust mechanism for fully distributed botnets that allows resilient and stealthy management of the infected machines (zombies). We draw on the extensively researched area of computational trust to build an autonomous mechanism that evades common botnet tracking tools such as sensors and crawlers. In our futuristic botnet, zombies are both smart and cautious. They are cautious in the sense that they are careful about whom they communicate with. They are also smart enough to learn from their experiences and to infer whether their fellow zombies are who they claim to be, rather than spies planted by government agencies. We study different computational trust models, mainly based on Bayesian inference, and evaluate their advantages and disadvantages in the context of a distributed botnet. Furthermore, our experimental results show that this approach is significantly more resilient than any technique observed in botnets to date. Finally, we step out of the adversarial perspective and discuss countermeasures against our own approach.
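The abstract does not specify which Bayesian trust model is used; as a purely illustrative sketch, the snippet below shows how a Beta-reputation style trust score could let a peer decide whether to keep communicating with another peer. The class names, threshold value, and decision rule are assumptions for illustration, not details taken from the presented work.

```python
from dataclasses import dataclass


@dataclass
class PeerTrust:
    """Beta-reputation record for a single peer (hypothetical model)."""
    positive: int = 0  # protocol-conforming interactions observed
    negative: int = 0  # failed or suspicious interactions observed

    def expected_trust(self) -> float:
        # Posterior mean of Beta(positive + 1, negative + 1), i.e. a
        # Bayesian update of a uniform prior with binary observations.
        return (self.positive + 1) / (self.positive + self.negative + 2)


class TrustManager:
    """Keeps per-peer trust scores and decides whom to talk to."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold              # hypothetical acceptance threshold
        self.records: dict[str, PeerTrust] = {}

    def record_interaction(self, peer_id: str, success: bool) -> None:
        rec = self.records.setdefault(peer_id, PeerTrust())
        if success:
            rec.positive += 1
        else:
            rec.negative += 1

    def should_communicate(self, peer_id: str) -> bool:
        rec = self.records.setdefault(peer_id, PeerTrust())
        return rec.expected_trust() >= self.threshold
```

Under such a scheme, a sensor or crawler that never behaves like a genuine peer accumulates negative observations, its expected trust decays below the acceptance threshold, and well-behaved peers simply stop responding to it.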