Reading list

Virtualization (UniMoRe)

  • Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, Andrew Warfield "Xen and the Art of Virtualization"
    A detailed description of the architecture of Xen, one of the main hypervisors. Largely introductory in nature, the paper explains the main techniques behind the development of an efficient hypervisor.
  • Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, Andrew Warfield "Live Migration of Virtual Machines"
    A detailed description of current mechanisms for live migration of virtual machines.
    A description of the main defensive techniques in virtualized environments (research group at the University of Michigan).
  • Kenichi Kourai, Shigeru Chiba "HyperSpector: Virtual Distributed Monitoring Environments for Secure Intrusion Detection"
    The authors propose a distributed monitoring system for intrusion detection in a virtualized context. The paper also details the main issues involved in building such a system.
  • Yi Wang, Eric Keller, Brian Biskeborn, Jacobus van der Merwe, Jennifer Rexford "Virtual routers on the move: live router migration as a network-management primitive"
    The paper illustrates techniques for live migration of services (routers, in this case) as part of their management process. It can serve as a reference point when designing migration strategies in response to attacks.

Detection-Reaction (UniTN)

AM: During our meeting, I introduced the concept of Anonymous Auditing. Here are some related articles.

A couple of articles we should have written ourselves:
  • Jelasity, M. and Bilicki, V. Towards automated detection of peer-to-peer botnets: On the limits of local approaches. USENIX Workshop on Large-Scale Exploits and Emergent Threats, 2009.
    • ABSTRACT: State-of-the-art approaches for the detection of P2P botnets are on the one hand mostly local and on the other hand tailored to specific botnets involving a great amount of human time, effort, skill and creativity. Enhancing or even replacing this labor-intensive process with automated and, if possible, local network monitoring tools is clearly extremely desirable. To investigate the feasibility of automated and local monitoring, we present an experimental analysis of the traffic dispersion graph (TDG)—a key concept in P2P network detection—of P2P overlay maintenance and search traffic as seen at a single AS. We focus on a feasible scenario where an imaginary P2P botnet uses some basic P2P techniques to hide its overlay network. The simulations are carried out on an AS-level model of the Internet. We show that the visibility of P2P botnet traffic at any single AS (let alone a single router) can be very limited. While we strongly believe that the automated detection and mapping of complete P2P botnets is possible, our results imply that it cannot be achieved by a local approach: it will inevitably require very close cooperation among many different administrative domains and it will require state-of-the-art P2P algorithms as well.
    • AM: This paper is referenced by the BotGrep paper below, whose authors claim that the BotGrep technique can detect this kind of network based on its graph structure. This does not contradict the result of Jelasity's paper, since he was in fact calling to "fight fire with fire and start to devote serious efforts to the consideration of P2P infrastructures and algorithms for automated detection".

  • Márk Jelasity, Vilmos Bilicki: Scalable P2P Overlays of Very Small Constant Degree: An Emerging Security Threat. SSS 2009: 399-412
    • ABSTRACT: In recent years P2P technology has been adopted by Internet-based malware as a fault tolerant and scalable communication medium for self-organization and survival. It has been shown that malicious P2P networks would be nearly impossible to uncover if they operated in a stealth mode, that is, using only a small constant number of fixed overlay connections per node for communication. While overlay networks of a small constant maximal degree are generally considered to be unscalable, we argue in this paper that it is possible to design them to be scalable, efficient and robust. This is an important finding from a security point of view: we show that stealth mode P2P malware that is very difficult to discover with state-of-the-art methods is a plausible threat. In this paper we discuss algorithms and theoretical results that support the scalability of stealth mode overlays, and we present realistic simulations using an event based implementation of a proof-of-concept system.
    • AM: This paper shows that the stealth technique hinted at in the previous one is indeed practical.

  • Shishir Nagaraja, Prateek Mittal, Chi-Yao Hong, Matthew Caesar, Nikita Borisov. BotGrep: Finding P2P Bots with Structured Graph Analysis. USENIX Security Symposium, August 2010.
    • ABSTRACT: A key feature that distinguishes modern botnets from earlier counterparts is their increasing use of structured overlay topologies. This lets them carry out sophisticated coordinated activities while being resilient to churn, but it can also be used as a point of detection. In this work, we devise techniques to localize botnet members based on the unique communication patterns arising from their overlay topologies used for command and control. Experimental results on synthetic topologies embedded within Internet traffic traces from an ISP’s backbone network indicate that our techniques (i) can localize the majority of bots with low false positive rate, and (ii) are resilient to incomplete visibility arising from partial deployment of monitoring systems and measurement inaccuracies from dynamics of background traffic.
    • AM: This paper adopts graph-analysis techniques (clustering based on the SybilInfer algorithm) to detect potential botnet communities. Two approaches are discussed: a centralized one, where information about the communication graph is collected at a single point, and a decentralized one, where the graph analysis is partitioned among different ISPs, in part to preserve privacy. To explore further: the behavior of recent botnets such as Storm, Peacomm and Conficker, which are based on structured overlay networks, and whether the traces used in this paper are available to everyone.

Articles read, but of little interest: