Research Projects

In Computer Security

Defense against malicious code

Malicious code attacks are increasingly frequent and sophisticated. This research project aims to better understand how modern malware operates, through behavioral analysis and the identification of infection patterns (suspicious interactions between malware and the operating system, the network, and other programs), in order to create new defense mechanisms. Such mechanisms include frameworks for static and dynamic analysis, transparent (hardware-assisted) debuggers, signature generators, effective heuristics, and the application of data mining and machine learning techniques for detection and classification. The project is financed by CNPq (Universal 2014), CAPES/DPF (Pro-Forenses Program), institutional grants, and technology companies.
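As a toy illustration of the kind of infection pattern such heuristics look for, the sketch below flags a trace whose system-call sequence contains a known-suspicious subsequence. All call names and signatures here are hypothetical, not the project's actual signature set:

```python
# Toy behavioral matcher: flags a trace containing a suspicious
# n-gram of system calls (all names/signatures hypothetical).

SUSPICIOUS_NGRAMS = {
    ("open_process", "write_memory", "create_remote_thread"),  # code injection
    ("create_file", "set_registry_run_key"),                   # persistence
}

def ngrams(seq, n):
    """All contiguous length-n subsequences of seq."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def is_suspicious(trace):
    observed = set()
    for sig in SUSPICIOUS_NGRAMS:
        observed |= ngrams(trace, len(sig))
    return any(sig in observed for sig in SUSPICIOUS_NGRAMS)

trace = ["create_file", "open_process", "write_memory",
         "create_remote_thread", "sleep"]
print(is_suspicious(trace))  # True: contains the injection n-gram
```

Real behavioral analysis works over far richer features (arguments, timing, network activity), but the matching idea is the same.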

Faculty: Prof. André Grégio.

Hardware-based OS security

In recent years, several hardware features have emerged to assist in building security mechanisms for operating systems and applications. Examples include the Trusted Platform Module (TPM), cryptographic processors and, more recently, Intel's SGX (Software Guard Extensions). SGX allows the definition of a secure (encrypted) memory area, called an enclave, that the processor can only access while executing the code placed inside it. Through SGX it is possible to protect a program and its data from other programs and from the OS kernel, which makes it possible to safely execute sensitive code on third-party computers or in computing clouds. Our research interest lies in understanding SGX technology and in building operating system abstractions that simplify its use.

Faculty: Prof. Carlos Maziero.

Security in the Future Internet

Information-centric networks (ICN) are a new network paradigm in which content is located and accessed by name, rather than by IP address (as in current networks). ICNs are particularly interesting for the distribution of media, such as audio and video. In these networks, the usual information-protection techniques, such as SSL secure channels, are no longer applicable, due to the intensive use of distributed content caches. Our research interest in this context concerns techniques for content protection and access control in this new type of network.
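One general way ICNs replace channel security with content security, sketched here as an illustration rather than as this project's specific proposal, is self-certifying names: the name embeds a digest of the content, so a copy served by any untrusted cache can be verified by name alone:

```python
import hashlib

# Self-certifying ICN name sketch: the name carries the SHA-256 digest
# of the content, so integrity can be checked without a secure channel.

def make_name(prefix: str, content: bytes) -> str:
    return f"{prefix}/sha256={hashlib.sha256(content).hexdigest()}"

def verify(name: str, content: bytes) -> bool:
    expected = name.rsplit("sha256=", 1)[-1]
    return hashlib.sha256(content).hexdigest() == expected

name = make_name("/videos/lecture01", b"frame data")
print(verify(name, b"frame data"))     # True: cache copy is authentic
print(verify(name, b"tampered data"))  # False: content was modified
```

Note that this covers integrity only; access control to cached content, one of the project's interests, requires additional mechanisms such as content encryption.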

Faculty: Prof. Carlos Maziero.

Security in wireless networks

This project develops security solutions for wireless networks, including mobile ad hoc networks and delay- and disruption-tolerant networks. These solutions include the design of key management schemes, the analysis of protocols under malicious attacks, reputation and trust schemes, and related mechanisms.

Faculty: Prof. Luis Albini.

In Distributed Systems

Autonomic and Scalable Algorithms for Building Resilient Distributed Systems

Organizations and individuals increasingly depend on distributed systems to execute tasks of many different natures. Seamless distributed services have become more complex and, at the same time, have strong reliability requirements. In this project we propose to develop autonomic algorithms for building resilient distributed systems. The construction of autonomic, self-managing distributed systems presents a number of challenges, particularly in dynamic environments where large numbers of resources are discovered or aggregated on demand and are subject to unpredictable loads, failures, or downtime. In distributed systems subject to faults, a primary task for system self-adaptation is the detection of the faulty elements.

VCube (Virtual Cube) is a distributed diagnosis algorithm that organizes the system's processes on a virtual hypercube topology with several logarithmic properties. In this project we plan to work on several promising topics that rely on autonomic self-management and self-reconfiguration in the presence of faults to build resilient large-scale distributed algorithms on top of VCube, including quorum-based mutual exclusion, data replication, and atomic broadcast. We also plan to adapt the VCube algorithm itself to run even when system diagnosis is not perfect.
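The virtual hypercube underlying VCube can be sketched as follows: with n = 2^d processes, each process has one virtual neighbor per dimension, obtained by flipping one bit of its identifier, which is where the logarithmic properties come from. This is a simplification; VCube's actual cluster-based tester assignment is richer than plain hypercube neighbors:

```python
# Hypercube neighbor sketch: in a system of n = 2^d processes,
# process i's neighbor in dimension s is i with bit s-1 flipped,
# giving each process log2(n) virtual edges.

def neighbors(i: int, d: int):
    return [i ^ (1 << (s - 1)) for s in range(1, d + 1)]

# 8 processes (d = 3): each process monitors along 3 virtual edges.
print(neighbors(0, 3))  # [1, 2, 4]
print(neighbors(5, 3))  # [4, 7, 1]
```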

Besides the theoretical contribution, which includes the specification and correctness proofs of each of the four proposed contributions, we plan to implement all of these proposals in a cloud computing setting, so that the implementations themselves can be considered solid practical contributions.

Faculty: Prof. Elias Duarte Jr.

Scalable Dependability

This project aims at designing novel techniques for scalable state-machine replication (SMR). In this respect, we have designed and implemented two execution models, P-SMR and S-SMR, as described in the project proposal. The project will address three fundamental aspects of P-SMR and S-SMR: (a) automated and semi-automated conflict detection; (b) recovery in scalable and parallel SMR; and (c) dynamic reconfiguration in scalable and parallel SMR.

Automated and semi-automated conflict detection: in its current form, P-SMR requires the service designer to identify command dependencies by manual inspection. The designer must provide a structure that identifies which pairs of commands can be executed concurrently and which pairs must be executed sequentially. We intend to investigate approaches to automated conflict detection.
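A standard definition of the conflict relation, assumed here for illustration (the structure the designer actually provides to P-SMR may differ), is that two commands conflict when they access a common state variable and at least one of them writes it. A minimal sketch:

```python
# Command conflict sketch: commands are (read_set, write_set) pairs;
# two commands conflict iff some variable written by one is also
# read or written by the other.

def conflicts(cmd_a, cmd_b) -> bool:
    reads_a, writes_a = cmd_a
    reads_b, writes_b = cmd_b
    return bool((writes_a & (reads_b | writes_b)) | (writes_b & reads_a))

get_x = ({"x"}, set())   # read-only command on x
put_x = (set(), {"x"})   # write command on x
put_y = (set(), {"y"})   # write command on y

print(conflicts(get_x, put_x))  # True  -> must execute sequentially
print(conflicts(get_x, put_y))  # False -> may execute concurrently
print(conflicts(put_x, put_x))  # True  -> write-write conflict
```

Automating conflict detection then amounts to deriving these read/write sets from the service code rather than from manual inspection.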

Recovery in scalable and parallel SMR: although recovery is crucial in the design of fault-tolerant systems, only minor advances have been observed in the context of state-machine replication. Recent surveys have pointed out weaknesses of the common durability techniques (logging, checkpointing, and state transfer) applied to the SMR model, and there are ongoing discussions on the challenges and performance limitations of checkpointing in practical SMR implementations. No approach in the literature has proposed recovery procedures for the multithreaded and distributed versions of SMR.

Dynamic reconfiguration in scalable and parallel SMR: dynamic reconfiguration is the ability of a system to change its membership on the fly, that is, members can join and leave the system during execution, as opposed to shutting the system down and restarting it with the new configuration. It is an essential property in environments where membership may change often and high availability is needed. Although reconfiguration has been studied in the context of state-machine replication under relatively stable conditions, existing algorithms are not efficient if the system is large and its members change often.

Faculty: Prof. Elias Duarte Jr.

Monitoring, Routing, and Computing in Arbitrary Topology Networks

The Internet was not originally designed to serve the current immense number of users, nor to support the huge range of applications that run on it. Significant efforts have been made worldwide to build so-called network testbeds, which arose with the objective of providing infrastructure to support experimental research on what is conventionally called the “Future Internet”. In conjunction with network virtualization technologies, researchers can build large-scale networks on which to perform experiments over heterogeneous networks under realistic conditions.

This research plan includes the development of monitoring and management strategies for experiments on the PlanetLab testbed, allowing the dynamic maintenance of the set of selected nodes in order to monitor the execution of experiments. In the same context, the work plan also aims to investigate solutions for “network programmability”, allowing users to configure the requirements they need from the network. A robust network necessarily depends on robust routing strategies that are able to recover quickly after failures and other events in the network topology. Robust routing is essential for realistic confidence in the functioning of the network, in particular the Internet. This project also foresees the definition of robust routing algorithms that take into account newly available technologies, in particular SDN (Software-Defined Networks) and NFV (Network Function Virtualization).
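As a much-simplified illustration of routing recovery, the sketch below recomputes a path after a link failure. Real robust-routing schemes (for example, precomputed backup paths installed via SDN) are far more sophisticated, but the underlying requirement is the same: a valid path must be restored quickly after a topology event:

```python
from collections import deque

# Recompute a route with BFS after a link failure (toy illustration).

def bfs_path(graph, src, dst):
    """Shortest path by hop count from src to dst, or None if unreachable."""
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_path(graph, "a", "d"))  # ['a', 'b', 'd']
graph["a"].remove("b")            # link a->b fails
print(bfs_path(graph, "a", "d"))  # ['a', 'c', 'd']: rerouted
```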

Faculty: Prof. Elias Duarte Jr.

In Operating Systems

Emulation of I/O devices using time freezing in virtualized environments

The manipulation of time in virtual machines allows the emulation of devices with arbitrary, even infinite, speed. This research project proposes and explores the concept of time freezing applied to disk emulation: the virtual machine's clock is stopped while the emulation runs, the responses are scheduled at the desired (virtual) time, and processing is then resumed. The project also includes the implementation of a prototype of this approach based on the KVM hypervisor.
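A conceptual sketch of the time-freezing idea, in plain Python rather than KVM code: the guest's virtual clock is the host clock minus all time spent frozen, so work done while frozen, such as emulating a slow device, appears to take almost no guest time:

```python
import time

# Virtual clock sketch: guest time = host time - total frozen time.
class VirtualClock:
    def __init__(self):
        self.frozen_total = 0.0
        self.freeze_start = None

    def freeze(self):
        self.freeze_start = time.monotonic()

    def resume(self):
        self.frozen_total += time.monotonic() - self.freeze_start
        self.freeze_start = None

    def guest_time(self):
        return time.monotonic() - self.frozen_total

clock = VirtualClock()
t0 = clock.guest_time()
clock.freeze()
time.sleep(0.05)   # emulate a "slow" device while guest time is frozen
clock.resume()
elapsed = clock.guest_time() - t0
print(elapsed < 0.05)  # True: the guest barely saw any time pass
```

In the actual prototype this manipulation happens at the hypervisor level, where the guest's clock sources can be controlled transparently.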

Faculty: Prof. Luis Carlos Erpen de Bona.

Distributed storage systems in P2P networks

Digital archiving systems are intended to preserve large volumes of data to be accessed in the future. Digital libraries and Internet applications such as e-mail, photo sharing, and Web page archives are some examples of services that need these systems. The economic viability of digital archiving systems depends on the low cost of storing data. Thus, replicating information across multiple storage repositories built from commodity, low-cost machines is a common alternative to ensure the long-term archiving of information. In this context, Peer-to-Peer (P2P) networks appear as a promising approach, since they are highly scalable for data distribution and retrieval. However, the P2P computing model per se does not address high availability and data reliability as required in this type of environment; replication strategies that guarantee these requirements must therefore be established.
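One common replication strategy in P2P storage, sketched here as a general illustration rather than this project's specific scheme, places k replicas of each object on the k successor nodes of its key on a consistent-hashing ring, so placement and repair remain scalable as peers join and leave:

```python
import hashlib
from bisect import bisect_right

# Consistent-hashing replica placement sketch: each object is stored on
# the k nodes that follow its key clockwise on the hash ring.

def h(s: str) -> int:
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)

def replica_nodes(key: str, nodes, k: int):
    ring = sorted(nodes, key=h)                       # nodes by ring position
    positions = [h(n) for n in ring]
    start = bisect_right(positions, h(key)) % len(ring)
    return [ring[(start + i) % len(ring)] for i in range(min(k, len(ring)))]

nodes = [f"peer-{i}" for i in range(8)]
print(replica_nodes("photo-album-2019.tar", nodes, 3))
```

If a peer fails, the object's successor set shifts by one node, so only the replicas held by the failed peer need to be re-created.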

Faculty: Prof. Luis Carlos Erpen de Bona.

OS resource management in virtualized systems

Operating system virtualization is a fundamental technology for building large-scale systems, such as computing clouds. The software responsible for virtualization is called a hypervisor; it maps the physical resources of the real machine, such as CPU, memory, and disk space, into virtual resources that can be accessed by virtualized operating systems and applications. Correct and efficient management of physical and virtual resources is essential for the good performance and security of virtualized systems. This project aims to: a) better understand the mechanisms implemented by hypervisors and propose improvements to them; b) build distributed models and mechanisms for managing physical and virtual resources in computational clouds.

Faculty: Prof. Carlos Maziero.

en/projetos.txt · Last modified: 2020/03/16 14:21 by maziero