Theses and Dissertations

2014

ABSTRACTS

Departamento de Informática 
Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio
Rio de Janeiro - Brazil
 

This file contains the list of the M.Sc. dissertations and Ph.D. theses presented to the Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro - PUC-Rio, Brazil, in 2014. They are all available in print format and, according to the authors' preference, some of them are freely available for download, while others are available for download to the PUC-Rio community exclusively (*).

For any requests, questions, or suggestions, please contact:
Rosane Castilho bib-di@inf.puc-rio.br

Last update: 29/MAY/2015
 

INDEX


[Under construction; some online versions may not be available yet]

[14_MSc_marins]
Aarão Irving Manhães MARINS. Um método para simulação de agentes para cenas de mar em produções para cinema. [Title in English: A method for massive agents simulation of sea scenes for TV / Film productions]. M.Sc. Diss. Port. Presentation: 09/01/2014. 80 p. Advisor: Bruno Feijó.

Abstract: This MSc dissertation presents a method to simulate sea scenes for TV / Film with a large number of agents of various types (vessels, ports, people ...) for productions with a high degree of realism in both image and agents' behavior. This method uses fuzzy-logic modeling and programming in the MASSIVE system as the main approach to integrate the results with rendering systems and high-resolution image compositing. An important goal of this dissertation is to create a system that facilitates the work of modelers and designers involved in the production pipeline.


[14_MSc_moralesfigueroa]
Amparito Alexandra MORALES FIGUEROA. Pré-busca de conteúdo em apresentações multimídia. [Title in English: Prefetching content in multimedia presentations]. M.Sc. Diss. Port. Presentation: 21/03/2014. 206 p. Advisor: Luiz Fernando Gomes Soares.

Abstract: When multimedia applications are delivered and presented through a communication network, presentation lag can be a major and critical factor affecting presentation quality. In a good-quality presentation, synchronism is always preserved, so all contents are presented continuously according to the authoring specifications. In this dissertation, a multimedia content prefetching plan is proposed in order to minimize presentation lag and guarantee synchronism between the media objects that constitute the multimedia application. The proposed mechanism targets multimedia applications developed with the NCL declarative language and takes advantage of event-based synchronization to determine the ideal retrieval order of the media objects and to calculate their retrieval start times. Furthermore, important issues to be considered in a prefetching environment are raised, and the different algorithms that make up the prefetching plan are developed.


[14_PhD_leal]
André Luiz de Castro LEAL. Análise de conformidade de software com base em catálogos de requisitos não funcionais: uma abordagem baseada em sistemas multi-agentes. [Title in English: Software compliance analysis based on softgoal catalogs: a multi-agent systems approach]. Ph.D. Thesis. Port. Presentation: 10/04/2014. 206 p. Advisor: Julio Cesar Sampaio do Prado Leite.

Abstract: The analysis of non-functional requirements (NFRs) is a challenge and has been explored in the literature. This interest stems from the difficulty of analyzing how NFR operationalizations are used in software. In this thesis we present a method, with supporting tools and techniques, that checks whether a software product complies with non-functional requirement standards as described in a catalog, as an alternative to the NFR analysis problem. The strategy adopted in this thesis uses autonomous agents to check software compliance regarding the operationalization of an NFR, using a knowledge base of patterns persisted in a catalog. Initial results show that the proposed solution is applicable. The validity evaluation is given by demonstrating that a partially automated method is effective in identifying compliance. This work differs from others by linking NFRs to their effective implementation. A method based on NFR patterns was applied to common software in order to show the application of the proposed strategy. An agent-based framework, working with XML descriptions, for checking software compliance with respect to an NFR catalog was built.

  
[14_PhD_nunes]
Bernardo Pereira NUNES. Towards a well-interlinked Web through matching and interlinking approaches. [Title in Portuguese: Interligando recursos na Web através de abordagens de matching e interlinking]. Ph.D. Thesis. Eng. Presentation: 10/02/2014. 88 p.  Advisors: Marco Antonio Casanova and Wolfgang Nejdl.

Abstract: With the emergence of Linked (Open) Data, a number of novel and notable research challenges have been raised. The “openness” that often characterises Linked Data offers an opportunity to homogeneously integrate and connect heterogeneous data sources on the Web. As disparate data sources with overlapping or related resources are provided by different data publishers, their integration and consolidation becomes a real challenge. An additional challenge of Linked Data lies in the creation of a well-interlinked graph of Web data. Identifying and linking not only identical Web resources, but also lateral Web resources, provides the data consumer with a richer representation of the data and the possibility of exploiting connected resources. In this thesis, we present three approaches that tackle data integration, consolidation and linkage problems. Our first approach combines mutual information and genetic programming techniques for complex datatype property matching, a rarely addressed problem in the literature. In the second and third approaches, we adopt and extend a measure from social network theory to address data consolidation and interlinking. Furthermore, we present a Web-based application named Cite4Me that provides a new perspective on search and retrieval of Linked Open Data sets, as well as the benefits of using our approaches. Finally, we validate our approaches through extensive evaluations using real-world datasets, reporting results that outperform state-of-the-art approaches.

 
[14_MSc_passos]
Bruno Leonardo Kmita de Oliveira PASSOS. Estudo de heurísticas para problemas de escalonamento em um ambiente com máquinas indisponíveis. [Title in English: Scheduling algorithms application for machine availability constraint]. M.Sc. Diss. Port. Presentation: 16/06/2014. 90 p.  Advisor: Eduardo Sany Laber.

Abstract: Most literature in scheduling theory assumes that machines are always available during the scheduling time interval, which in practice is not true due to machine breakdowns or resource usage policies. We study a few available heuristics for the NP-hard problem of minimizing the makespan when breakdowns may happen. We also develop a new scheduling heuristic based on historical machine availability information. Our experimental study, with real data, suggests that this new heuristic is better in terms of makespan than other algorithms that do not take this information into account. We apply the results of our investigation for the asset-pricing problem of a fund portfolio in order to determine a full valuation market risk using idle technological resources of a company.

 
[14_PhD_vieira]
Bruno Lopes VIEIRA. Extending propositional dynamic logic for Petri Nets. [Title in Portuguese: Extensões de lógica proposicional dinâmica para Redes de Petri]. Ph.D. Thesis. Eng. Presentation: 28/03/2014. 101 p.  Advisors: Edward Hermann Haeusler and Gilles Dowek.

Abstract: Propositional Dynamic Logic (PDL) is a multi-modal logic used for specifying and reasoning on sequential programs. Petri Net is a widely used formalism to specify and to analyze concurrent programs, with a very intuitive graphical representation. In this work, we propose some extensions of Propositional Dynamic Logic for reasoning about Petri Nets. First, we define a compositional encoding of Petri Nets from basic nets as terms. Second, we use these terms as PDL programs and provide a compositional semantics for PDL formulas. Then we present an axiomatization and prove completeness with respect to our semantics. Three versions of Dynamic Logic for reasoning about Petri Nets are presented: one for ordinary Marked Petri Nets and two for Marked Stochastic Petri Nets, making it possible to model more complex scenarios. Some deductive systems are presented. The main advantage of our approach is that we can reason about [Stochastic] Petri Nets using our Dynamic Logic without translating them into other formalisms. Moreover, our approach is compositional, allowing for the construction of complex nets from basic ones.

 
[14_MSc_dias]
Camila Pereira DIAS. Um modelo para cobertura de notícias na Web: um estudo sobre notícias digitais. [Title in English: A model for Web-based news coverage]. M.Sc. Diss. Port. Presentation: 16/07/2014. 77 p.  Advisor: Daniel Schwabe.

Abstract: According to Tim Berners-Lee, the Web is a hypermedia environment. Consequently, publishing news in an online vehicle is creating a hypertext, which implies not only relating facts, but also indicating, through navigational links, the relations between them and with other relevant news. Thus, a navigational topology design is necessary that allows a better comprehension of the facts by readers and also encourages them to keep reading. The motivation for this work is to help journalists and, consequently, media vehicles, to adapt their news to the Web. We propose a structure to organize news based on the concept of news coverage, which is reflected in a navigational topology. This approach is qualitatively validated through experiments on a real news site, comparing readers' behaviour when facing navigational topologies that are or are not based on hypertexts.

 
[14_PhD_valadares]
Carolina VALADARES. A multiagent based context-aware and self-adaptive model for virtual network provisioning. [Title in Portuguese: Um sistema multi-agente auto-adaptativo baseado em conhecimento de contexto para gerenciamento de redes virtuais]. Ph.D. Thesis. Eng. Presentation: 09/04/2014. 93 p.  Advisor: Carlos José Pereira de Lucena.

Abstract: Recent research in Network Virtualization has focused on the Internet ossification problem, whereby multiple independent virtual networks (VN) that exhibit a high degree of autonomy share physical resources and can provide services with a varying degree of quality. Thus, the networking field has taken evolutionary steps toward rethinking the design and architectural principles of VN. However, to the best of our knowledge, there has been little investigation into the autonomic behavior of such architectures. This thesis describes an attempt to use Multiagent System (MAS) principles to design an autonomic and self-adaptive model for virtual network provisioning (VNP) that fills a gap in the current Internet architecture. In addition, we provide an analysis of the requirements of self-adaptive provisioning for designing a reliable autonomic model that is able to self-organize its own resources, with no external control, in order to cope with environment changes. Such behavior will be required as the next-generation Internet evolves. Through our evaluation, we demonstrate that the model achieves its main purpose of efficiently self-organizing the VN, since it is able to anticipate critical scenarios and trigger corresponding adaptive plans.

  
[14_MSc_ferreira]
Cátia Maria Dias FERREIRA. Avaliação da meta-comunicação intercultural na comunicação humano-computador: um vocabulário para acessar as perspectivas culturais dos usuários. [Title in English: Intercultural Metacommunication Evaluation in Human-Computer Interaction: A Vocabulary to Access Users' Cultural Perspectives]. M.Sc. Diss. Port. Presentation: 21/08/2014. 195 p.  Advisor: Clarisse Sieckenius de Souza.

Abstract: It is a fact that cultural diversity has become a new challenge in Human-Computer Interaction. Today users can navigate almost anywhere on the Web, with no cultural and/or national boundaries, having intentional or unintentional contact with foreign cultural elements (languages, practices etc.). Therefore, the Web has become a privileged place for intercultural encounters, i.e., a place where users have the opportunity to be in contact with cultural diversity directly (when interacting with other users through social networks, for example) or indirectly (when interacting with applications that have foreign cultural features). This scenario indicates the need to investigate how the intercultural metacommunication (communication about communication) from designers to users is noticed by users. Accordingly, we intend to understand how users perceive the opportunities to make contact with cultural diversity when interacting with cross-cultural applications and how these perceptions can contribute to the HCI evaluation activities of these systems. Thus, in order to investigate whether and how users express their perceptions and reactions to the opportunities promoted by indirect intercultural encounters, we conducted empirical studies in which we offered a specific vocabulary (Cultural Viewpoint Metaphors). We also conducted other studies without offering any vocabulary, that is, we let users speak freely about the opportunities to make contact with cultural diversity in Human-Computer Interaction. These studies were conducted in the context of two cross-cultural applications (one in a linguistic domain and one in a non-linguistic domain). Among the results obtained, the potential of the specific vocabulary in the interaction design cycle of cross-cultural systems stood out, revealing that Cultural Viewpoint Metaphors are a promising supporting tool for participatory design practices, i.e., a medium of expression and communication for users to qualify their real or potential interaction experiences.

 
[14_PhD_lustosa]
Cecília Reis Englander LUSTOSA. On some relations between Natural Deduction and Sequent Calculus. [Title in Portuguese: Algumas relações entre Cálculo de Sequentes e Dedução Natural]. Ph.D. Thesis. Eng. Presentation: 08/07/2014. 104 p.  Advisors: Edward Hermann Haeusler and Gilles Dowek.

Abstract: Segerberg presented a general completeness proof for propositional logics. For this purpose, a Natural Deduction system was defined in a way that its rules were rules for an arbitrary boolean operator in a given propositional logic. Each of those rules corresponds to a row of the operator’s truth-table. In the first part of this thesis we extend Segerberg’s idea to finite-valued propositional logic and to non-deterministic logic. We maintain the idea of defining a deductive system whose rules correspond to rows of truth-tables, but instead of having n types of rules (one for each truth-value), we use a bivalent representation that makes use of the technique of separating formulas as defined by Carlos Caleiro and João Marcos. The system defined has so many rules that it may be laborious to work with. We believe that a sequent calculus system defined in a similar way would be more intuitive. Motivated by this observation, in the second part of this thesis we work out translations between Sequent Calculus and Natural Deduction, searching for a better bijective relationship than those already existing.

 
[14_PhD_costa]
Débora Mendonça Cardador Corrêa da COSTA. Um olhar crítico sobre o projeto de interfaces tangíveis baseado na Engenharia Semiótica. [Title in English: A critique of tangible user interface design based on Semiotic Engineering]. Ph.D. Thesis. Port. Presentation: 07/04/2014. 108 p.  Advisor: Hugo Fuks.

Abstract: With the embedding of computing resources into physical elements, computing is moving toward ubiquity (or pervasiveness) and is present throughout the physical environment. Homes, furniture, and everyday life objects are the interfaces with which people now interact. Such new interfaces herald an interaction paradigm that is little known and exploited to date, such as Tangible User Interfaces (TUIs), which use physical artifacts for representing and manipulating digital information. Developing TUIs means acknowledging both concrete (form) and abstract (behavior) aspects of an interface. This work proposes a method called Collaborative Tangible Prototyping Based on Semiotic Engineering that combines prototyping and Semiotic Engineering approaches to tangible interface design. By combining these approaches, the method brings together the benefits of continued structured experimentation provided by prototyping with the advantages of a focus on communicability from Semiotic Engineering for designing tangibles. A case study is conducted to investigate whether the proposed method contributes to incorporating the Semiotic Engineering perspective in the design of tangible user interfaces.

 
[14_PhD_pecin]
Diego Galindo PECIN. Exact algorithms for the capacitated vehicle routing problem. [Title in Portuguese: Algoritmos exatos para o problema de roteamento de veículos capacitado]. Ph.D. Thesis. Eng. Presentation: 25/04/2014. 75 p.  Advisors: Marcos Vinicius Soledade Poggi de Aragão and Eduardo Uchoa Barboza.

Abstract: Vehicle Routing Problems are among the most difficult combinatorial problems to solve to optimality. They were proposed in the late 1950s and have been widely studied since then. This interest arises from their practical importance, as well as the difficulty of providing efficient algorithms to solve them. This thesis is mainly concerned with the exact resolution of the Capacitated Vehicle Routing Problem (CVRP). In this problem, a set of customers, each one associated with a demand, must be serviced by a fleet of vehicles. All vehicles have the same (limited) capacity and are initially located in the same central depot. A solution is a set of routes, starting and ending at the depot, that visit every customer exactly once. The only constraint on a route is that the sum of the demands of its customers does not exceed the vehicle capacity. The objective is to find a solution with minimum total cost. The best performing exact algorithms for the CVRP developed in the last 10 years are based on the combination of cut and column generation. Some authors only used cuts expressed over the variables of the original formulation, in order to keep the pricing subproblem relatively easy. Other authors could reduce the duality gaps by also using a restricted number of cuts over the Master LP variables, stopping when the pricing becomes prohibitively hard. A particularly effective family of such cuts are the Subset Row Cuts. This thesis introduces a technique that greatly reduces the impact of these cuts on the pricing, thus allowing many more cuts to be added. The newly proposed Branch-Cut-and-Price algorithm also incorporates and combines for the first time (often in an improved way) several elements found in previous works, like route enumeration, variable fixing and strong branching. All the instances used for benchmarking exact algorithms, with up to 199 customers, were solved to optimality. Moreover, some larger instances with up to 360 customers, previously considered only by heuristic methods, were also solved.

 
[14_PhD_lima]
Edirlei Everson Soares de LIMA. Video-based interactive storytelling. [Title in Portuguese: Storytelling interativo baseado em vídeo]. Ph.D. Thesis. Eng. Presentation: 04/08/2014. 218 p.  Advisor: Bruno Feijó.

Abstract: The generation of engaging visual representations for interactive storytelling represents a key challenge for the evolution and popularization of interactive narratives. Usually, interactive storytelling systems adopt computer graphics to represent the virtual story worlds, which facilitates the dynamic generation of visual content. Although animation is a powerful storytelling medium, live-action films still attract more attention from the general public. In addition, despite the recent progress in graphics rendering and the wide-scale acceptance of 3D animation in films, the visual quality of video is still far superior to that of real-time generated computer graphics. In the present thesis, we propose a new approach to create more engaging interactive narratives, denominated “Video-Based Interactive Storytelling”, where characters and virtual environments are replaced by real actors and settings, without losing the logical structure of the narrative. This work presents a general model for interactive storytelling systems based on video, including the authorial aspects of the production phases and the technical aspects of the algorithms responsible for the real-time generation of interactive narratives using video compositing techniques.

 
[14_MSc_camara]
Eduardo Castro Mota CÂMARA. Um estudo sobre atualização dinâmica de componentes de software. [Title in English: A study of dynamic update for software components]. M.Sc. Diss. Eng. Presentation: 26/03/2014. 85 p.  Advisor: Noemi de La Rocque Rodriguez.

Abstract: Component-based development of software systems consists of composing systems from ready-made, reusable software units. Many software component systems in production need to be available 24 hours a day, 7 days a week. Dynamic updates allow systems to be upgraded without interrupting the execution of their services, applying the update at runtime. Many dynamic software update techniques in the literature use applications specifically implemented to cover the points presented, and only a few are driven by the historical needs of a real system. This work studies the main cases of updates that occur in a widely used component system, Openbus, an integration infrastructure responsible for the communication of various applications for data acquisition, processing and interpretation. In addition to this study, we implement a dynamic software update solution to accommodate the needs of this system. Afterwards, using the implemented solution, we present an overhead test and applications of updates on Openbus.

  
[14_PhD_motta]
Eduardo Neves MOTTA. Indução e seleção incrementais de atributos no aprendizado supervisionado. [Title in English: Supervised learning incremental feature induction and selection]. Ph.D. Thesis. Eng. Presentation: 05/09/2014. 90 p.  Advisor: Ruy Luiz Milidiú.

Abstract: Non-linear feature induction from basic features is a method of generating predictive models with higher precision for classification problems. However, feature induction may rapidly lead to a huge number of features, causing overfitting and models with low predictive power. To prevent this side effect, regularization techniques are employed to obtain a trade-off between a reduced feature set representative of the domain and generalization power. In this work, we describe a supervised machine learning approach that incrementally induces and selects feature conjunctions derived from base features. This approach integrates decision trees, support vector machines and feature selection using sparse perceptrons in a machine learning framework named IFIS – Incremental Feature Induction and Selection. Using IFIS, we generate regularized non-linear models with high performance using a linear algorithm. We evaluate our system on two natural language processing tasks in two different languages. For the first task, POS tagging, we use two corpora: the WSJ corpus for English and Mac-Morpho for Portuguese. Our results are competitive with the state-of-the-art performance in both, achieving accuracies of 97.14% and 97.13%, respectively. In the second task, Dependency Parsing, we use the CoNLL 2006 Shared Task Portuguese corpus, achieving better results than those reported during that competition and competitive with the state of the art for this task, with a UAS score of 92.01%. Applying model regularization using a sparse perceptron, we obtain SVM models 10 times smaller, while maintaining their accuracies. We achieve model reduction by regularization of feature domains, which can reach 99%. Using the regularized model, we shrink the model's physical size by up to 82% and cut prediction time by up to 84%. Downsizing domains and models also enhances feature engineering, through compact domain analysis and incremental inclusion of new features.

 
[14_MSc_alvarenga]
Eduardo Pimentel de ALVARENGA. Identificação de caracteres para reconhecimento automático de placas veiculares. [Title in English: Optical character recognition for automated license plate recognition systems]. M.Sc. Diss. Eng. Presentation: 14/04/2014. 50 p.  Advisor: Ruy Luiz Milidiú.

Abstract: ALPR (Automatic License Plate Recognition) systems are commonly used in applications such as traffic control, parking ticketing, exclusive lane monitoring and others. The basic structure of an ALPR system can be divided in four major steps: image acquisition; license plate localization in a picture or movie frame; character segmentation; and character recognition. In this work we focus solely on the recognition step. For this task, we used a multiclass Perceptron, enhanced by an entropy-guided feature generation technique. We show that it is possible to achieve results on par with the state-of-the-art solution, with a lightweight architecture that allows continuous learning, even on low-processing-power machines, such as mobile devices.

 
[14_MSc_goldner]
Eliana Leite GOLDNER. Um algoritmo de menor caminho em rastreamento de horizontes sísmicos. [Title in English: Evaluation of a shortest path algorithm for seismic horizon tracking]. M.Sc. Diss. Port. Presentation: 06/06/2014. 67 p.  Advisor: Marcelo Gattass.

Abstract: The manual interpretation of a seismic horizon is a time-consuming process, which drives the research for automatic or semi-automatic tracking methods. Among the known propositions that use correlation, there is a common limitation: the usage of local approaches to determine which samples belong to the horizon. This kind of approach performs well on data where there are no seismic faults. However, by using only local information, it is prone to error propagation in low-coherency areas, which usually correspond to fault regions. The goal of this work is to evaluate the performance of shortest path algorithms as a solution for the horizon tracking problem. It intends to propose a global method that is robust to different seismic features.

 
[14_MSc_ching]
Elias Fukim Lozano CHING. An algorithm to generate random sphere packs in arbitrary domains. [Title in Portuguese: Um algoritmo de geração randômica de esferas em domínios arbitrários]. M.Sc. Diss. Eng. Presentation: 21/08/2014. 75 p.  Advisor: Marcelo Gattass.

Abstract: The Discrete Element Method (DEM) based on spheres can provide acceptable approximations to many complex physical phenomena at both micro and macro scales.  Normally a DEM simulation starts with an arrangement of spherical particles packed inside a given container.  For general domains the creation of the sphere pack may be complex and time consuming, especially if the pack must comply with the accuracy and stability requirements of the simulation.  The objective of this work is to extend a 2D disk packing solution to generate random assemblies composed of non-overlapping spherical particles. The constructive algorithm presented here uses the advancing front strategy, in which spheres are inserted one by one according to a greedy strategy based on the previously inserted particles. The advancing front strategy requires the existence of an initial set of spheres that defines the boundary of the packing region.  Another important extension presented here is the generalization of the algorithm to deal with arbitrary objects defined by a triangular boundary mesh.  This work also presents some results that support conclusions and suggestions for further work.

 
[14_PhD_monsalve]
Elizabeth Suescún MONSALVE. Uma abordagem para transparência pedagógica usando aprendizagem baseada em jogos. [Title in English: An approach for pedagogical transparency using game-based learning]. Ph.D. Thesis. Eng. Presentation: 03/04/2014. 256 p.  Advisor: Júlio Cesar Sampaio do Prado Leite.

Abstract: This thesis is about a vision of transparency anchored in the information disclosure principle. Transparency emerges as an important issue that aims to make students aware of educational processes and contents. The purpose of this research is to study pedagogical transparency in the context of game-based learning (GBL). Transparency in pedagogy aims to improve the quality of teaching and the relationship between student, teacher and teaching methods. In the GBL context, the use of transparency is explored through an experiment that allows a better comprehension of the results obtained, providing evidence on the educational effect of the use of games. Evaluations with different groups of students were carried out to determine the effectiveness of the proposal, and the results indicated the efficacy of this approach, where i* contributed to pedagogical transparency.

  
[14_PhD_guimaraes]
Everton Machado GUIMARÃES. A blueprint-based approach for prioritizing and ranking critical code anomalies. [Title in Portuguese: ]. Ph.D. Thesis. Eng. Presentation: 12/09/2014. 142 p.  Advisor: Alessandro Fabricio Garcia.

Abstract: Software systems are constantly evolving due to many changing requirements. As the software evolves, it grows in size and complexity, and consequently, its architecture design tends to degrade. Architecture degradation symptoms are often a direct consequence of the progressive insertion of code anomalies in the software implementation. A code anomaly is a recurring implementation structure that possibly indicates deeper architectural design problems. A code anomaly is considered critical when it is related to a structural problem in the software architecture. Its criticality stems from its negative influence on a wide range of non-functional requirements. For instance, the presence of critical code anomalies hinders software maintainability, i.e. these critical anomalies require wide refactoring in order to remove an architectural problem. Symptoms of architecture degradation often have to be observed in the source code due to the lack of an explicit, formal representation of the software architecture in a project. Many approaches have been proposed for detecting code anomalies in software systems, but none of them efficiently supports the prioritization and ranking of critical code anomalies according to their architectural impact. Our work investigates how the prioritization and ranking of such critical code anomalies could be improved by using blueprints. Architecture blueprints are usually provided by software architects from the early stages of system development. Blueprints are informal design models usually defined to capture and communicate key architectural design decisions. Even though blueprints are often incomplete and inconsistent with respect to the underlying implementation, we aim to study whether their use can contribute to improving the processes of prioritizing and ranking critical code anomalies. Aiming to address these research goals, a set of empirical studies has been performed. We also proposed and evaluated a set of heuristics to support developers when prioritizing and ranking code anomalies in three software systems. The results showed an average accuracy higher than 60% when prioritizing and ranking code anomalies associated with architectural problems in these systems.

 
[14_MSc_cunha]
Francisco José Plácido da CUNHA. JAT4BDI: uma nova abordagem para testes de agentes deliberativos. [Title in English: JAT4BDI: a new approach to testing deliberative agents]. M.Sc. Diss. Eng. Presentation: 17/12/2014. 82 p.  Advisor: Carlos José Pereira de Lucena.

Abstract: The growth and popularity of the Web has fueled the development of network-based software. The use of multi-agent systems (MAS) in this context is considered a promising approach and has been applied in different areas, such as security or mission-critical business scenarios, enhanced monitoring of environments and people, etc., which means that analyzing the choices made by this type of software can become crucial. However, the methodologies proposed so far by Agent-Oriented Software Engineering (AOSE) have focused their efforts mainly on disciplined approaches to analyze, design and implement an MAS, and little attention has been given to how such systems can be tested. Furthermore, with regard to tests involving software agents, some issues related to controllability and observability hamper the task of checking behavior, such as: (i) the time the agent takes in its decision-making process; (ii) the fact that the agent's beliefs and goals are embedded in the agent itself, hampering the observation and control of behavior; (iii) problems associated with test coverage. This research presents a novel approach for unit testing of BDI agents written in BDI4JADE, based on the combination and arrangement of ideas supported by the JAT framework, a framework for testing agents written in JADE, and the fault model proposed by Zhang.

 
[14_PhD_moreira]
Gustavo Costa Gomes MOREIRA. Um método para a detecção em tempo real de objetos em vídeos de alta definição. [Title in English: A method for real-time object detection in HD videos]. Ph.D. Thesis. Eng. Presentation: 08/04/2014. 85 p.  Advisor: Bruno Feijó.

Abstract: The detection and subsequent tracking of objects in video sequences is a challenge in terms of real-time video processing. In this thesis we propose a detection method suitable for processing high-definition video in real time. In this method we use a segmentation procedure based on the integral image of the foreground, which allows a very quick discarding of various parts of the image in each frame, thus achieving a high rate of processed frames per second. We then extend the proposed method to detect multiple objects in parallel. Furthermore, by using a GPU and techniques whose performance can be enhanced through parallelism, such as the prefix sum operator, we achieve an even better performance of the algorithm, both in the detection of objects and in the training stage of new object classes.
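The integral-image rejection idea mentioned in the abstract can be sketched as follows. This is an illustrative Python sketch under assumed inputs (a tiny binary foreground mask and arbitrary window coordinates), not the thesis's GPU implementation:

```python
def integral_image(mask):
    """Build a summed-area table: sat[i][j] = sum of mask[0..i-1][0..j-1]."""
    h, w = len(mask), len(mask[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        row_sum = 0
        for j in range(w):
            row_sum += mask[i][j]
            sat[i + 1][j + 1] = sat[i][j + 1] + row_sum
    return sat

def region_sum(sat, top, left, bottom, right):
    """Sum of the mask over rows [top, bottom) and cols [left, right), in O(1)."""
    return (sat[bottom][right] - sat[top][right]
            - sat[bottom][left] + sat[top][left])

# Hypothetical 8x8 foreground mask with a single 2x2 block of foreground pixels.
mask = [[0] * 8 for _ in range(8)]
for i in (3, 4):
    for j in (5, 6):
        mask[i][j] = 1

sat = integral_image(mask)
# A detector window with no foreground pixels can be discarded in O(1):
print(region_sum(sat, 0, 0, 3, 3))  # 0 -> discard window
print(region_sum(sat, 2, 4, 6, 8))  # 4 -> contains foreground, keep
```

Once the table is built, every window query costs four lookups regardless of window size, which is what makes the quick per-frame discarding feasible.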

 
[14_MSc_gallegosvergara]
Henry Giovanny GALLEGOS VERGARA. Visualização de variedades implícitas de dimensão 3 no R4. [Title in English: Visualization of 3-dimensional implicit manifolds in R4]. M.Sc. Diss. Port. Presentation: 22/08/2014. 55 p.  Advisor: Hélio Cortes Vieira Lopes.

Abstract: The main objective of this work is to present a new method for the visualization of implicit 3-manifolds in R4. This method consists primarily of a preprocessing step on the CPU, using a 16-tree and interval arithmetic to detect regions of the domain where the manifold is present. These data are then processed on the GPU to perform the visualization, for which a generalization of the ray casting technique was adopted.
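The interval-arithmetic pruning step can be sketched as follows. This is an illustrative Python sketch, not the dissertation's implementation; the unit 3-sphere in R4 is an assumed example of an implicit manifold:

```python
class Interval:
    """Minimal interval arithmetic: just enough to evaluate sums of squares."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, c):  # subtract a plain constant
        return Interval(self.lo - c, self.hi - c)

    def sq(self):  # tight enclosure of the square of an interval
        a, b = self.lo * self.lo, self.hi * self.hi
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(a, b)
        return Interval(lo, max(a, b))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def may_contain_manifold(box):
    """box: four (lo, hi) pairs describing a cell of the R4 domain.
    Example implicit 3-manifold: f(x) = x0^2 + x1^2 + x2^2 + x3^2 - 1 = 0.
    If 0 is not in the interval image f(box), the cell is safely discarded."""
    f = Interval(0.0, 0.0)
    for lo, hi in box:
        f = f + Interval(lo, hi).sq()
    return (f - 1.0).contains_zero()

# A cell far from the sphere is pruned; a cell crossing it is kept for the GPU stage.
print(may_contain_manifold([(2.0, 3.0)] * 4))  # False -> prune this cell
print(may_contain_manifold([(0.0, 1.0)] * 4))  # True  -> refine / render
```

Cells that pass this conservative test are the ones a 16-tree would subdivide and eventually hand to the ray-casting stage.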


[14_PhD_cunha]
Herbert de Souza CUNHA. Desenvolvimento de software baseado em requisitos. [Title in English: Aware software development based on requirements]. Ph.D. Thesis. Eng. Presentation: 11/03/2014. 215 p.  Advisor: Júlio Cesar Sampaio do Prado Leite.

Abstract: Software awareness has become an important requirement in the construction of self-adaptive systems. Such software should adapt better to changes in the various environments in which it operates, be aware of (in the sense of perceiving and understanding) these environments, and be aware of its own operation in these environments. However, even at a basic level applied to software, awareness is a requirement that is difficult to define. Our work proposes the creation of a catalog for the awareness requirement through non-functional requirements patterns (NFR patterns). We also propose mechanisms for enabling the instantiation and use of the knowledge about awareness represented in this catalog.


[14_PhD_canepavega]
Katia Fabiola CÁNEPA VEGA. Beauty technology as an interactive computing platform. [Title in Portuguese: Beauty Technology como uma plataforma de computação interativa]. Ph.D. Thesis. Eng. Presentation: 26/02/2014. 81 p.  Advisor: Hugo Fuks.

Abstract: This work introduces the term Beauty Technology as an emergent field in Wearable Computing. Wearable Computing has changed the way individuals interact with computers. Beauty Technology intertwines the natural capabilities of the human body with an interactive platform by hiding technology in beauty products, creating muscle-based interfaces that do not give the wearer a cyborg look. Several applications of beauty technologies used in everyday products and shown in exhibitions expose the feasibility of this technology. Conductive Makeup, Beauty Tech Nails, FX e-makeup and Hairware exemplify Beauty Technology prototypes.


[14_MSc_santos]
Leandro Tavares Aragão dos SANTOS. Geração de mapas de profundidade super-resolvidos a partir de sensores de baixo custo e imagens RGB. [Title in English: Superresolved depth maps using low cost sensors and RGB images]. M.Sc. Diss. Eng. Presentation: 27/02/2014. 75 p.  Advisor: Alberto Barbosa Raposo.

Abstract: There are many applications of three-dimensional reconstruction of real scenes. The rise of low-cost sensors, such as the Kinect, suggests the development of systems cheaper than the existing ones. Nevertheless, the data provided by this device are worse than those provided by more sophisticated sensors. In the academic and commercial worlds, some initiatives, described by Tong and by Cui, try to solve that problem. Studying these attempts, this work proposes the modification of the super-resolution algorithm described by Mitzel in order to consider in its calculations the colored images provided by the Kinect, as in the approach of Cui. This change improved the super-resolved depth maps produced, mitigating interference caused by sudden changes in the captured scenes. The tests confirmed the improvement of the generated maps and analyzed the impact of CPU and GPU implementations of the algorithm in the super-resolution step. This work is restricted to this step; the next stages of 3D reconstruction have not been implemented.

 
[14_PhD_silva]
Lincoln David Nery e SILVA. A scalable middleware for structured data provision and dissemination in distributed mobile systems. [Title in Portuguese: Um middleware escalável para a provisão e disseminação de dados estruturados em sistemas distribuídos móveis]. Ph.D. Thesis. Eng. Presentation: 11/05/2014. 110 p.  Advisor: Markus Endler.

Abstract: Applications such as vehicle fleet monitoring and logistics systems, emergency response coordination, environmental monitoring or mobile workforce management employ mobile networks as means of communication, information sharing and coordination among a possibly very large set of mobile nodes interconnected by a Wide Area Network (WAN). The majority of those systems thus requires real-time tracking of the mobile nodes' context information, interaction with all participant nodes, as well as means of adaptability in a very dynamic scenario, where it is not possible to predict when, where and for how long the nodes will remain connected. Despite being a subject of much research, current solutions still lack essential features required for communication with mobile nodes, such as reliable message delivery, handover support, resilience to intermittent connectivity, IP address changes and firewall traversal. This thesis proposes a data management model that enables the deployment of a network of Data Provider components with reliable and on-time dissemination and transformation of information among thousands of mobile nodes interconnected through the wireless internet. Performance tests indicate that our model scales to thousands of mobile nodes and supports reliable, high-throughput and on-time data dissemination between several thousands of mobile Data Providers and Data Consumers.

 
[14_MSc_ferreira]
Manuele dos Reis FERREIRA. Detecção de anomalias de código de relevância arquitetural em sistemas de multilinguagem. [Title in English: Detecting architecturally-relevant code anomalies on multi-language systems]. M.Sc. Diss. Eng. Presentation: 11/04/2014. 101 p.  Advisor: Alessandro Fabrício Garcia.

Abstract: Recent studies show that systems are often designed with at least four languages. The best development practices for these languages also differ. These aspects of heterogeneity make it difficult to design solutions that support developers' activities in building multi-language systems with quality. In particular, several approaches have emerged with the aim of assisting analysts in comprehending and maintaining systems. However, there is still a lack of approaches focused on the detection of code anomalies in multi-language systems. Thus, the aim of this work is to support the identification of symptoms of architectural degradation through the use of metrics-based strategies in multi-language systems.

 
[14_PhD_quispecruz]
Marcela QUISPE CRUZ. Some results in proof-theory based on graphs. [Title in Portuguese: ]. Ph.D. Thesis. Eng. Presentation: 17/12/2014.  90 p.  Advisors: Edward Hermann Haeusler and Lew Gordeev (Universität Tübingen, Germany).

Abstract: Traditional proof theory of propositional logic deals with proofs whose size can be huge. Proof-theoretical studies discovered exponential gaps between normal or cut-free proofs and their respective non-normal proofs. Thus, the use of proof-graphs, instead of trees or lists, for representing proofs is getting popular among proof-theoreticians. Proof-graphs serve as a way to provide better symmetry to the semantics of proofs, to study the complexity of propositional proofs, and to provide more efficient theorem provers with respect to the size of propositional proofs. The aim of this work is to reduce the weight/size of deductions. We present formalisms of proof-graphs that are intended to capture the logical structure of a deduction and to facilitate its visualization. The advantage of these formalisms is that formulas and sub-deductions in Natural Deduction, preserved in the graph structure, can be shared, deleting unnecessary sub-deductions and resulting in a reduced proof. In this work, we give a precise definition of proof-graphs for purely implicational logic, then extend this result to full propositional logic and show how to reduce (eliminating maximal formulas) these representations, such that a normalization theorem can be proved by counting the number of maximal formulas in the original derivation. Strong normalization is a direct consequence of such normalization, since any reduction decreases the corresponding measures of derivation complexity. Continuing with our aim of studying the complexity of proofs, the current approach also gives graph representations for first-order logic, deep inference and bi-intuitionistic logic.

 
[14_MSc_mota]
Marcelle Pereira MOTA.  PoliFacets: um modelo de design da metacomunicação de documentos ativos para apoiar o ensino e aprendizado de programação. [Title in English: PoliFacets: a design model for the metacommunication of active documents to support teaching and learning of computer programming]. M.Sc. Diss. Port. Presentation: 14/04/2014. 202 p.  Advisor: Clarisse Sieckenius de Souza.

Abstract: Nowadays, there is a need to use technology to effect citizen participation in society. Users are no longer only passive software consumers, and a growing share of them are using technology as a medium to express new ideas and opportunities. In a democratic future scenario, the more people can express themselves through the effective and efficient use of technology, the lower the risk that those who can do it determine what others will do. However, the process of teaching and learning computational thinking, which is the basic skill for self-expression through software, is a big challenge. Teachers need to learn computational concepts themselves before they can teach them to students. In elementary and high schools they generally do not have support for teaching this kind of content. This dissertation presents a model for the design of active documents which aims at supporting the teaching and learning of computational thinking. The model is based on Semiotic Engineering theory, and its instantiation in a real scenario came about as an active document used in several empirical studies over three years with elementary and high schools in the metropolitan region of Rio de Janeiro. Technically and scientifically, the main contribution of this dissertation is an epistemic tool to structure analyses and decisions during the design of the metacommunication of active documents to support the teaching and learning of self-expression through software.

  
[14_PhD_camanho]
Marcelo de Mello CAMANHO. A model for stream-based interactive storytelling. [Title in Portuguese: Um modelo para storytelling interativo baseado em streaming de vídeo]. Ph.D. Thesis. Eng. Presentation: 11/04/2014. 92 p.  Advisor: Bruno Feijó.

Abstract: In this thesis we present a highly scalable architecture for massive multi-user storytelling systems based on video streams. The proposed architecture can support different demands for interactivity, generation, and visualization of stories in digital television environments, which include TV set-top boxes, tablets, smartphones, and computers. In this architecture, the same story adapts itself to the spectator's device in terms of rendering and interface processes automatically. Also, a model for sharing massive interactive stories is presented. Moreover, the proposed system preserves the logical coherence of the story that unfolds while keeping it interactive.

 
[14_MSc_coutinho]
Marcelo Novaes COUTINHO. Um processo de gerência de estratégia de rastreabilidade: um caso em ambiente Oracle. [Title in English: A process for defining traceability strategies: a case in an Oracle environment]. M.Sc. Diss. Eng. Presentation: 04/09/2014. 66 p.  Advisors: Julio Cesar Sampaio do Prado Leite and Soeli Terezinha Fiorini.

Abstract: Effective requirements traceability supports higher project maturity and better product quality. Researchers argue that traceability must be explicitly defined in advance to be effective. In addition, studies show that professionals rarely follow explicit traceability strategies. An explicit traceability strategy should at least define the artifacts to be traced and the traces to be created between them. In a development environment of Oracle procedures, a traceability strategy is usually rare, especially between the requirements specification and the code, which makes code maintenance very expensive. This work presents a proposal for a process that facilitates the explicit definition of traceability strategies and of the activities necessary for the use of traceability. The process also includes traceability design and strategy validation activities. As a case study, the proposed process is instantiated in an Oracle development environment.
 

 
[14_MSc_rosemberg]
Marcio Ricardo ROSEMBERG. SRAP - a new authentication protocol for semantic Web applications. [Title in Portuguese: SRAP - um novo protocolo para autenticação em aplicações voltadas para a Web Semântica]. M.Sc. Diss. Eng. Presentation: 16/06/2014. 103 p.  Advisors: Marcus Vinicius Soledade Poggi de Aragão and Daniel Schwabe.

Abstract: Usually, linked data makes Semantic Web applications query much more information for processing than traditional Web applications. Since not all information is public, some form of authentication may be imposed on the user. Querying data from multiple data sources might require many authentication prompts. Such time-consuming operations, added to the extra amount of time a Semantic Web application needs to process the data it collects, might be frustrating to users and should be minimized. The purpose of this dissertation is to analyze and compare several Semantic Web authentication techniques available, leading to the proposal of a faster and more secure authentication protocol for Semantic Web applications.

 
[14_MSc_rosas]
Mauricio Arieira ROSAS. Estudo da aplicação de componentes hierárquicos em um sistema de captura e acesso. [Title in English: A study of hierarchical components in a capture and access system]. M.Sc. Diss. Eng. Presentation: 29/08/2014. 66 p.  Advisor: Noemi de La Rocque Rodriguez.

Abstract: The aim of this work is to evaluate a software component system that provides in its model an abstraction of composite components. We chose SCS as the component system for this study, which defines a set of rules for nesting, encapsulation and sharing to manage the behavior of its composite components. The main focus of this study is to evaluate the effectiveness of these composition rules in assisting application developers. To conduct this evaluation, we adapted the capture and access system CAS, developed with the SCS middleware, in order to employ composite components, and created a scenario to analyze the model and implementation of the middleware.

 
[14_MSc_germano]
Pedro Boechat de Almeida GERMANO. Geração de malhas rodoviárias na GPU. [Title in English: Road network generation on the GPU]. M.Sc. Diss. Eng. Presentation: 28/03/2014. 94 p.  Advisor: Alberto Barbosa Raposo.

Abstract: The first stage in the pipeline of a procedural city generation system is typically the generation of the road network. This work presents a parallel algorithm for road network generation on the GPU, using a work-queue-based execution model. This algorithm receives declarative parameters along with geographic and socio-statistical maps and produces a high-level representation of an urban road network.

 
[14_MSc_rocha]
Pedro de Goes Carnaval ROCHA. Um mecanismo baseado em logs com meta-informações para a verificação de contratos em sistemas distribuídos. [Title in English: A mechanism based on logs with meta-information for the verification of contracts in distributed systems]. M.Sc. Diss. Port. Presentation: 21/08/2014. 64 p.  Advisor: Arndt von Staa.

Abstract: Software contracts can be written as assertions that identify failures observed while using the software. Software contracts can be implemented through executable assertions. However, conventional assertions are not directly applicable in distributed systems, as they present difficulties in evaluating temporal expressions, as well as expressions involving properties of different processes. This work proposes a mechanism based on logs with meta-information to evaluate contracts in distributed systems. A grammar for writing contracts enables temporal operations, e.g., it allows specifying conditions between events at different timestamps, or even guaranteeing a sequence of events over a period of time. The flow of events is evaluated asynchronously with respect to the system execution, by comparison with contracts previously written according to the grammar, representing the expectations on the behavior of the system.
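The kind of temporal contract described above can be sketched as follows. This is an illustrative Python sketch, not the dissertation's mechanism or grammar; the event names, the `within` parameter, and the "reply follows request" contract are invented examples:

```python
from collections import namedtuple

# A hypothetical enriched log event: a name plus a timestamp (meta-information).
Event = namedtuple("Event", ["name", "ts"])

def check_follows(log, first, second, within):
    """Contract: every `first` event must be followed by a `second` event
    at most `within` time units later. Returns the timestamps of `first`
    events that violate the contract (empty list -> contract holds).
    The log is evaluated offline, asynchronously to the system run."""
    violations = []
    for i, ev in enumerate(log):
        if ev.name != first:
            continue
        ok = any(later.name == second and later.ts - ev.ts <= within
                 for later in log[i + 1:])
        if not ok:
            violations.append(ev.ts)
    return violations

log = [Event("request", 0.0), Event("reply", 0.4),
       Event("request", 1.0), Event("request", 5.0)]

print(check_follows(log, "request", "reply", within=1.0))  # [1.0, 5.0]
```

Because the check runs over a recorded event flow rather than inline assertions, conditions spanning different timestamps (and, with merged logs, different processes) become straightforward to express.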

 
[14_MSc_assis]
Pedro Henrique Ribeiro de ASSIS. Distant supervision for relation extraction using ontology class hierarchy-based features. [Title in Portuguese: Supervisão à distância em extração de relacionamentos usando características baseadas em hierarquia de classes em ontologias]. M.Sc. Diss. Eng. Presentation: 20/03/2014. 64 p.  Advisor: Marco Antonio Casanova.

Abstract: Relation extraction is a key step in the problem of deriving structure from natural language text. In general, structures are composed of entities and relationships among them. The most successful approaches to relation extraction apply supervised machine learning on hand-labeled corpora to create highly accurate classifiers. Although good robustness is achieved, hand-labeled corpora are not scalable due to the expensive cost of their creation. In this work we apply an alternative paradigm, called distant supervision, for creating a considerable number of instance examples for classification. Along with this alternative approach, we adopt Semantic Web ontologies to propose and use new features for training classifiers. These features are based on the structure and semantics described by the ontologies in which Semantic Web resources are defined. The use of such features has a great impact on the precision and recall of our final classifiers. In this work, we apply our approach to a corpus extracted from Wikipedia, achieving high precision and recall for a considerable number of relations.

 
[14_MSc_guillon]
Raquel Jauffret GUILLON. Um mecanismo baseado em logs com meta-informações para a verificação de contratos em sistemas distribuídos. [Title in English: A mechanism based on logs with meta-information for the verification of contracts in distributed systems]. M.Sc. Diss. Port. Presentation: 22/08/2014. 64 p.  Advisor: Arndt von Staa.

Abstract: In the software testing stage, faults can be revealed and then diagnosed to identify the defects that caused them. Tests should ideally be applied from the unit level to the higher levels of software, such as system testing. GUI (Graphical User Interface) testing resides at one of these levels. Ensuring the correct operation of the GUI, regarding the state of its elements after various user events, is as important as testing the other layers, since the GUI is a direct way to interact with the application, being the feature that most influences how the experience will be qualified by the end user. This dissertation proposes a Model-Based Testing (MBT) approach using high-level Petri nets to represent the graphical user interface. A Petri net is a modeling tool and a mathematical specification language that graphically defines the structure of systems, especially concurrent ones. An important feature of Petri nets is that they can be simulated, allowing one to observe the behavior of the system and to generate test cases from the paths executed in the simulation. The generation of test suites for GUIs from the Petri net model was investigated. For this, we considered the relationship between user actions and the resulting states of the GUI, observing how a Petri net can model them. A support tool was developed so that, from the simulations of the Petri net, test suites were generated in the C++ language, making it possible to automatically run them on a software under study. Finally, the Mutation Analysis test criterion, which measures the effectiveness of the suite generated from the Petri net, was employed as a means of validating this work.

 
[14_MSc_pinheiro]
Rodrigo Braga PINHEIRO. Um estudo sobre técnicas de renderização do fenômeno de dispersão atmosférica. [Title in English: A study about rendering techniques for the atmospheric scattering phenomenon]. M.Sc. Diss. Port. Presentation: 18/03/2014. 103 p.  Advisor: Alberto Barbosa Raposo.

Abstract: One of the greatest challenges of computer graphics is to represent virtual environments that resemble the real environments observed by humans in their day-to-day life. In order to achieve this representation, several studies have been conducted in the area of photorealistic rendering, especially in regard to the physical modelling and representation of natural phenomena. This study aims to explain and analyze techniques that represent the phenomenon of atmospheric scattering, responsible for setting the color of the atmosphere/sky. To achieve this goal, two techniques are presented: one based on a physical model and the other based on an analytical model and approximations. This study presents the details of each technique and a comparison to help in choosing the technique according to the needs and requirements of the application that will represent the phenomenon of atmospheric scattering.

 
[14_MSc_maues]
Rodrigo de Andrade MAUÉS. Keep doing what I just did: automating smartphones by demonstration. [Title in Portuguese: Keep doing what I just did: automatizando Smartphone por demonstração]. M.Sc. Diss. Eng. Presentation: 14/03/2014. 97 p.  Advisor: Simone Diniz Junqueira Barbosa.

Abstract: Smartphones have become an integral part of many people’s lives. We can use these powerful devices to perform a great variety of tasks, ranging from making phone calls to connecting to the Internet. However, sometimes we would like some tasks to be performed automatically. These tasks can be automated by using automation applications, which continuously monitor the smartphone’s context to execute a sequence of actions when an event happens under certain conditions. These automations are starting to get popular with end users, since they can make their phones easier to use and even more battery efficient. However, little work has been done on empowering end users to create such automations. We propose an approach for automating smartphone tasks by retrospective demonstration. Succinctly, we consider the logic behind the approach as “keep doing what I just did”: the automation application continuously records the users’ interactions with their phones, and after users perform a task that they would like to automate, they can ask the application to create an automation rule based on their latest recorded actions. Since users only have to use their smartphones, as they would naturally do, to demonstrate the actions, we believe that our approach can lower the barrier for creating smartphone automations. To evaluate our approach, we developed prototypes of an application called Keep Doing It, which supports automating tasks by demonstration. We conducted a lab user study with the first prototype to gather participants’ first impressions. The participants created automation rules using our application based on given scenarios. Based on their feedback and on our observations, we refined the prototype and conducted a five-day remote user study with new participants, who could then create whichever and however many rules they wanted. Overall, the findings of both studies suggest that, although there were some occasional inaccuracies (especially when demonstrating rules that contain conditions), participants would be willing to automate smartphone tasks by demonstration due to its ease of use. We concluded that this approach has much potential to aid end users in automating their smartphones, but there are still issues that need to be addressed by further research.

 
[14_MSc_marques]
Thiago Manhente de Carvalho MARQUES. Reengenharia de uma aplicação científica para inclusão de conceitos de workflow. [Title in English: Reengineering of a scientific application to include workflow concepts]. M.Sc. Diss. Eng. Presentation: 18/12/2014.  p.  Advisor: Carlos José Pereira de Lucena.

Abstract: The use of workflow techniques in scientific computing is widely adopted in the execution of experiments and the building of in silico models. By analysing some challenges faced by a scientific application in the geosciences domain, we noticed that workflows could be used to represent the geological models created using the application, so as to ease the development of features to meet those challenges. Most works and tools in the scientific workflow domain, however, are designed for use in distributed computing contexts like web services and grid computing, which makes them unsuitable for integration or use within simpler scientific applications. In this dissertation, we discuss how to make viable the composition and representation of workflows within an existing scientific application. We describe a conceptual architecture of a workflow engine designed to be used within a stand-alone application. We also describe an implementation model of this architecture in a C++ application, using Petri nets to model a workflow and C++ functions to represent tasks. As proof of concept, we implemented this workflow model in an existing application and studied its impact on the application.
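The combination of a Petri net for control flow with plain functions as tasks can be illustrated with a toy sketch. This is a minimal Python illustration of the idea only (the dissertation's engine is in C++); the place names and tasks are invented examples:

```python
class Workflow:
    """Toy Petri-net workflow engine: places hold tokens, and a transition
    fires when all of its input places are marked, running an attached task."""
    def __init__(self):
        self.marking = {}        # place name -> token count
        self.transitions = []    # (input places, output places, task function)

    def add_transition(self, inputs, outputs, task):
        self.transitions.append((inputs, outputs, task))

    def enabled(self, t):
        inputs, _, _ = t
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def run(self):
        """Fire enabled transitions until the net reaches a dead marking."""
        fired = True
        while fired:
            fired = False
            for t in self.transitions:
                if self.enabled(t):
                    inputs, outputs, task = t
                    for p in inputs:
                        self.marking[p] -= 1   # consume input tokens
                    task()                     # run the attached task
                    for p in outputs:
                        self.marking[p] = self.marking.get(p, 0) + 1
                    fired = True

trace = []
wf = Workflow()
wf.marking = {"start": 1}
wf.add_transition(["start"], ["loaded"], lambda: trace.append("load model"))
wf.add_transition(["loaded"], ["done"], lambda: trace.append("run simulation"))
wf.run()
print(trace)                # ['load model', 'run simulation']
print(wf.marking["done"])   # 1
```

The appeal of this design for a stand-alone application is that the engine is self-contained: tasks are ordinary in-process functions, and the net's marking alone decides what runs next.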

 
[14_PhD_araujo]
Thiago Pinheiro de ARAÚJO. Using runtime information and maintenance knowledge to assist failure diagnosis, detection and recovery. [Title in Portuguese: ]. Ph.D. Thesis. Eng. Presentation: 17/10/2014. 97 p.  Advisor: Arndt von Staa.

Abstract: Even software systems developed with strict quality control may experience failures during their lifetime. When a failure is observed in a production environment, the maintainer is responsible for diagnosing the cause and eventually removing it. However, considering a critical service, this might demand too long a time to complete; hence, if possible, the failure signature should be identified in order to generate a recovery mechanism that automatically detects and handles future occurrences until a proper correction can be made. In this thesis, recovery consists of restoring a correct context allowing dependable execution, even if the causing fault is still unknown. To be effective, the tasks of diagnosis and recovery implementation require detailed information about the failed execution. Failures that occur during the test phase run in a controlled environment, allow adding specific code instrumentation and usually can be replicated, making it easier to study the unexpected behavior. However, failures that occur in the production environment are limited to the information present in the first occurrence of the failure. Runtime failures are by nature unexpected; hence runtime data must be gathered systematically to allow detecting, diagnosing with the purpose of recovering, and eventually diagnosing with the purpose of removing the causing fault. Thus there is a balance between the detail of the information inserted as instrumentation and the system performance: standard logging techniques usually present a low impact on performance but carry insufficient information about the execution, while tracing techniques can record precise and detailed information but are impracticable in a production environment. This thesis proposes a novel hybrid approach for recording and extracting a system's runtime information. The solution is based on event logs, where events are enriched with contextual properties about the current state of the execution at the moment the event is recorded. Using these enriched log events, a diagnosis technique and a tool were developed to allow event filtering based on the maintainer's perspective of interest. Furthermore, an approach using these enriched events was developed to detect and diagnose failures aiming at recovery. The proposed solutions were evaluated through measurements and studies conducted using deployed systems, based on failures that actually occurred while using the software in a production context.

 
[14_PhD_barroso]
Vitor Barata Ribeiro Blanco BARROSO. Simulação eficiente de fluidos no espaço paramétrico de malhas estruturadas tridimensionais. [Title in English: Efficient fluid simulation in the parametric space of three-dimensional structured grids]. Ph.D. Thesis. Port. Presentation: 19/02/2014. 85 p.  Advisor: Waldemar Celes Filho.

Abstract: Fluids are extremely common in our world and play a central role in many natural phenomena. Understanding their behavior is of great importance to a broad range of applications and several areas of research, from blood flow analysis to oil transportation, from the exploitation of river flows to the prediction of tidal waves, storms and hurricanes. When simulating fluids, the so-called Eulerian approach can generate quite correct and precise results, but the computations involved can become excessively expensive when curved boundaries and obstacles with complex shapes need to be taken into account. This work addresses this problem and presents a fast and straightforward Eulerian technique to simulate fluid flows in three-dimensional parameterized structured grids. The method’s primary design goal is the correct and efficient handling of fluid interactions with curved boundary walls and internal obstacles. This is accomplished by the use of per-cell Jacobian matrices to relate field derivatives in the world and parameter spaces, which allows the Navier-Stokes equations to be solved directly in the latter, where the domain discretization becomes a simple uniform grid. The work builds on a regular-grid-based simulator and describes how to apply Jacobian matrices to each step, including the solution of Poisson equations and the related sparse linear systems using both Jacobi iterations and a Biconjugate Gradient Stabilized solver. The technique is implemented efficiently in the CUDA programming language and strives to take full advantage of the massively parallel architecture of today’s graphics cards.

 
[14_MSc_maciel]
Walther Alexandre Giglio Lourenço MACIEL. Um estudo sobre o realce de atributos de falha em dados sísmicos baseado em modelos de colônia de formiga. [Title in English: A study about the enhancement of fault attributes in seismic data based on ant colony models]. M.Sc. Diss. Port. Presentation: 05/06/2014. 71 p.  Advisor: Marcelo Gattass.

Abstract: The interpretation of seismic faults is a complex and laborious task, which depends on the experience of the geologist. The interpretation is normally aided by seismic attributes. However, they may not be enough for clear visualization, nor for use in automatic extraction methods. This dissertation examines the state-of-the-art ACO algorithms for fault enhancement. This study reveals the importance, contributions and weaknesses of each step of these methods. From there, a new method is proposed which eliminates some of the problems found, achieving a more stable and faster convergence of the end result.